Search results for: normal and inverse Weibull models

2246 Roles and Responsibilities for the Success of an IT Project in an Organization

Authors: Vahhab Attar Olyaee, Fouad Attar Olyaee

Abstract:

Many IT projects fail because they take a purely technical approach, focus on the final product, and pay insufficient attention to strategic alignment. Project management models quite often take a technical management view [4], [8], [13], [14]. These models concentrate on finalizing the project product and delivering it to the customer. However, many project problems stem from ignoring the needs and capabilities of the organization, or from disregarding how the product will be deployed and used within it. In this research we therefore present a solution aimed at raising the value of the project to the organization, so that project outputs are properly deployed. A comprehensive model is presented that covers all processes, from the initial step of project definition through to the deployment of the final outputs in the organization, together with a definition of all the roles and responsibilities needed to put the model into practice. To validate the model, project problems were identified from the opinions of experts and project managers and then categorized and analyzed against the model. The analysis makes clear that neglecting a proper project definition, lacking a clear understanding of the expected value, and failing to supervise the value that emerges during production and installation are among the most important factors that bring a project to failure.

Keywords: IT Governance, Project Model, Roles and Responsibilities of Project

2245 Angiographic Evaluation of ETT (Treadmill) Positive Patients in a Tertiary Care Hospital of Bangladesh

Authors: Syed Dawood Md. Taimur, Saidur Rahman Khan, Farzana Islam

Abstract:

This study evaluated the factors that predict coronary artery disease in patients with a positive Exercise Tolerance Test (ETT, treadmill) and correlated them with coronary angiographic findings. This descriptive study was conducted at the Department of Cardiology, Ibrahim Cardiac Hospital & Research Institute, Dhaka, Bangladesh, from 1 January 2014 to 31 August 2014. All patients who underwent ETT (treadmill) for the diagnosis of chest pain were studied. One hundred and four patients underwent coronary angiography after a positive treadmill result. Patients were divided into two groups according to the angiographic findings: those with coronary artery involvement were classified as true positive and those without involvement as false positive, and the two groups were compared. Of the 104 patients, 81 (77.9%) had a true positive ETT and 23 (22.1%) a false positive ETT. The mean age of patients with a positive ETT was 53.46±8.06 years (males 53.63±8.36 years, females 52.87±7.0 years). Sixty-nine (85.19%) male and twelve (14.81%) female patients had a true positive ETT, whereas 15 (65.21%) males and 8 (34.79%) females had a false positive ETT; the difference between sexes was statistically significant (p<0.032). Risk factors such as diabetes mellitus, hypertension, dyslipidemia, family history and smoking were assessed. Hypertension was significantly more common among true positives (p<0.004), as were diabetes and dyslipidemia (p<0.032 and p<0.030). Among true positives, 68 (83.95%) had a family history and 52 (64.20%) were smokers; both family history (p<0.017) and smoking (p<0.012) differed significantly between the two groups. Forty-six true positive patients achieved the target heart rate (THR), which was not statistically significant (p=0.138), while 79 true positives had an abnormal resting ECG, which was significant (p<0.036). Among the involved vessels the most common was the LAD, 55 (67.90%), followed by the LCX, 42 (51.85%), the RCA, 36 (44.44%), and the LMCA, 9 (11.11%). Forty patients (49.38%) had single-vessel disease (SVD), 26 (30.10%) had double-vessel disease (DVD), and 15 (18.52%) had triple-vessel disease (TVD), while 23 had normal coronary arteries. It can be concluded that female patients with a positive ETT, a normal resting ECG and an achieved target heart rate are likely to have a false positive result, whereas male patients with an abnormal resting ECG who did not achieve THR on a symptom-limited ETT and who have hypertension, diabetes, dyslipidemia, a family history or smoking are likely to have a true positive treadmill test result.

Keywords: Exercise tolerance test, Coronary artery disease, Coronary angiography, True positive, False positive.

2244 Estimation of Vertical Handover Probability in an Integrated UMTS and WLAN Networks

Authors: Diganta Kumar Pathak, Manashjyoti Bhuyan, Vaskar Deka

Abstract:

Vertical handover (VHO) among different communication technologies, ensuring uninterrupted service continuity, is one of the most important performance parameters in a heterogeneous network environment. In an integrated Universal Mobile Telecommunication System (UMTS) and Wireless Local Area Network (WLAN), the WLAN is given inherent priority over UMTS because of its high data rates at low cost. Mobile users therefore want to be associated with the WLAN for as much of their roaming time as possible, to enjoy the best possible service at low cost, which encourages reducing the number of VHOs. In this work, the reduction of the number of VHOs with respect to a varying number of WLAN access points (APs) in an integrated UMTS and WLAN network is investigated through simulation, with the aim of providing the most cost-effective service to users. The simulation covers an area of (7800 × 9006) m2, using the COST-231 Hata model for the WLAN path loss and the 3GPP (TR 101 112 V 3.1.0) specified model for the UMTS path loss. The handover decision is triggered by comparing the received signal level against the fade margin, where the fade margin gives a probabilistic measure of the reliability of the communication link. A relationship between the number of WLAN APs and the number of VHOs is also established.
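
As a rough illustration of the handover trigger described in this abstract, the sketch below computes a COST-231 Hata path loss and compares the resulting received level against a fade margin. The formula is the standard published one; the transmit power, sensitivity and margin values are illustrative assumptions, not the paper's simulation parameters.

```python
import math

def cost231_hata_loss(f_mhz: float, d_km: float, h_base: float, h_mobile: float,
                      metropolitan: bool = False) -> float:
    """COST-231 Hata median path loss in dB (nominally valid 1500-2000 MHz)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base)
            - a_hm + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)

def needs_vertical_handover(p_tx_dbm: float, loss_db: float,
                            sensitivity_dbm: float, fade_margin_db: float) -> bool:
    """Trigger a VHO when the received level falls below sensitivity plus fade margin."""
    rss_dbm = p_tx_dbm - loss_db
    return rss_dbm < sensitivity_dbm + fade_margin_db

# Illustrative numbers only (not from the paper):
loss = cost231_hata_loss(f_mhz=1800, d_km=1.2, h_base=30, h_mobile=1.5)
print(f"path loss = {loss:.1f} dB, handover needed: "
      f"{needs_vertical_handover(43, loss, -90, 10)}")
```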

Keywords: VHO, UMTS, WLAN, MT, AP, BS.

2243 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we design a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduce the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we propose a genetic-algorithm-based method to efficiently search for the best hyperparameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyperparameter search guides the search towards better regions of the parameter space. Based on the hyperparameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
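
The abstract does not give the authors' GA details, so the following is a minimal sketch of a genetic algorithm over a discrete hyperparameter space; the search space, the fitness stand-in and the GA settings are all assumptions for illustration. In practice the fitness call would train the DCNN and return its validation accuracy.

```python
import random

# Hypothetical search space; the paper's actual ranges are not given.
SPACE = {
    "filters":    [32, 64, 96, 128],
    "kernel":     [1, 3, 5, 7],
    "lr":         [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def fitness(ind):
    # Stand-in for "train the DCNN and return validation accuracy":
    # this toy score simply prefers 96 filters and 3x3 kernels.
    return -abs(ind["filters"] - 96) - 10 * abs(ind["kernel"] - 3)

def ga_search(pop_size=12, generations=10, elite=2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = ranked[:elite] + children   # elitism keeps the best configurations
    return max(pop, key=fitness)

print(ga_search())
```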

Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.

2242 Impact of Enhanced Business Models on Technology Companies in the Pandemic: A Case Study about the Revolutionary Change in Management Styles

Authors: Murat Colak, Berkay Cakir Saridogan

Abstract:

Since the dawn of modern corporations, almost every employee has worked in the same loop of three basic steps: going to work, meeting the demands of the job, and returning home. Only a small number of people were able to break that pattern and work outside the box. When the 2019 pandemic hit and most companies shut their physical offices, that loop had to change for everyone, meaning the old management styles had to be substantially rearranged into "work from home" business methods. These methods include online conferences and meetings, algorithmic time and task tracking, globalization of work, and, most importantly, remote working. After the global epidemic started, even the tech giants were concerned. It can now be seen that technology companies have enjoyed an incredible step-up in their share prices compared to other companies, because they knew how to manage such situations better than any other industry. This study takes the old, traditional management styles of big companies and compares them with the post-COVID methods (2019-2022). Through this comparison, based on annual reports and published statistics, the study aims to explain why the winners of this crisis are the technology companies.

Keywords: COVID-19, technology companies, business models, remote work.

2241 Simulating the Dynamics of Distribution of Hazardous Substances Emitted by Motor Engines in a Residential Quarter

Authors: S. Grishin

Abstract:

This article is dedicated to the development of mathematical models for determining the dynamics of hazardous-substance concentrations in the urban turbulent atmosphere. The models take into account the spatio-temporal variability of the meteorological fields and such properties of the turbulent atmosphere as vorticity, nonlinearity, dissipativity and diffusivity. The turbulent airflow velocity is not assumed to be known. However, a simplified model assumes that the ratio of turbulent to molecular diffusion is a piecewise-constant function of the vertical distance from the earth's surface. This introduces into the mathematical model the important assumption that urban air is vertically stratified due to the atmospheric accumulation of hazardous substances emitted by motor vehicles. Through a non-degenerate transformation, the suggested simplified nonlinear model for the sought exhaust concentration at an a priori unknown turbulent flow velocity is reduced to a model that is then solved analytically.
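
A minimal numerical illustration of the simplified model's key assumption, a piecewise-constant diffusion coefficient that changes with height, is sketched below as an explicit finite-difference scheme. The paper itself solves the transformed model analytically; all grid and coefficient values here are invented.

```python
import numpy as np

def vertical_dispersion(nz=50, dz=2.0, dt=0.25, steps=4000):
    """Explicit finite differences for dc/dt = d/dz( K(z) dc/dz ) + S(z),
    with a piecewise-constant eddy diffusivity K(z), mirroring the simplified
    model's assumption. All grid and coefficient values are invented."""
    z = np.arange(nz) * dz
    K = np.where(z < 30.0, 5.0, 1.0)       # stronger mixing near the surface, m^2/s
    assert dt <= dz**2 / (2 * K.max()), "explicit-scheme stability limit"
    c = np.zeros(nz)                        # exhaust concentration profile
    S = np.zeros(nz)
    S[0] = 1e-3                             # ground-level traffic emission source
    for _ in range(steps):
        flux = K[:-1] * np.diff(c) / dz     # diffusive flux at cell interfaces
        dc = np.zeros(nz)
        dc[1:-1] = (flux[1:] - flux[:-1]) / dz
        dc[0] = flux[0] / dz                # zero-flux ground boundary
        dc[-1] = -flux[-1] / dz             # zero-flux top boundary
        c += dt * (dc + S)
    return z, c

z, c = vertical_dispersion()
print(np.round(c[:5], 4))                   # accumulation near the surface
```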

Keywords: Urban ecology, time-dependent mathematical model, exhaust concentration, turbulent and molecular diffusion, airflow velocity.

2240 A Mathematical Model for Predicting Isothermal Soil Moisture Profiles Using Finite Difference Method

Authors: Kasthurirangan Gopalakrishnan, Anshu Manik

Abstract:

Subgrade moisture content varies with environmental and soil conditions and has a significant influence on pavement performance. It is therefore important to establish realistic estimates of expected subgrade moisture contents, so that the effects of this variable on predicted pavement performance can be properly accounted for during the design stage. The initial boundary soil-suction profile for a given pavement is a critical factor in determining the expected moisture variations in the subgrade for the given pavement, climatic and soil conditions. Several numerical models have been developed for predicting water and solute transport in saturated and unsaturated subgrade soils. These models require soil hydraulic properties to quantitatively describe water and chemical transport processes: hydraulic conductivity, water diffusivity, and specific water capacity. The objective of this paper is to determine isothermal moisture profiles in a soil fill and to predict soil moisture movement above the groundwater table using a simple one-dimensional finite difference model.
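
Below is a minimal sketch of the kind of one-dimensional finite difference model described, assuming a simple moisture-dependent diffusivity and illustrative boundary conditions (zero flux at the surface, saturation at the water table); the paper's actual soil parameters are not given in the abstract.

```python
import numpy as np

def soil_moisture_profile(n=40, dz=0.05, dt=10.0, t_end=3.6e5):
    """Explicit 1-D finite-difference solution of the isothermal moisture
    diffusion equation d(theta)/dt = d/dz( D(theta) d(theta)/dz ).
    D(theta) and the boundary values are illustrative assumptions."""
    D0, beta = 1e-8, 8.0                 # hypothetical diffusivity parameters
    theta = np.full(n, 0.15)             # initial volumetric moisture content
    theta[-1] = 0.40                     # saturated node at the water table
    for _ in range(int(t_end / dt)):
        D = D0 * np.exp(beta * theta)    # moisture-dependent diffusivity
        Di = 0.5 * (D[:-1] + D[1:])      # diffusivity at cell interfaces
        flux = Di * np.diff(theta) / dz
        theta[1:-1] += dt / dz * (flux[1:] - flux[:-1])
        theta[0] += dt / dz * flux[0]    # zero-flux ground surface
        theta[-1] = 0.40                 # water table keeps the bottom saturated
    return theta

print(soil_moisture_profile().round(3))  # moisture rises toward the water table
```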

Keywords: Fill, Hydraulic Conductivity, Pavement, Subgrade.

2239 Wireless Sensor Network to Help Low-Income Farmers Face Drought Impacts

Authors: Fantazi Walid, Ezzedine Tahar, Bargaoui Zoubeida

Abstract:

This research presents the main ideas for implementing an intelligent system composed of communicating wireless sensors that measure environmental data linked to drought indicators (such as air temperature and soil moisture). In addition, a spatio-temporal database communicating with a web-mapping application is proposed for real-time monitoring, 24 hours a day, 7 days a week, to allow the time evolution of the drought parameters to be screened and extracted. The system thus helps detect areas affected by the phenomenon of drought. The spatio-temporal conceptual models address users who need to manage soil water content for irrigation, fertilization or other activities aimed at increasing crop yield, and they give users a readable diagram of data that is easy to apprehend. Combined with socio-economic information, the system helps identify the people impacted by the phenomenon and the corresponding severity, especially as this information is accessible to farmers and stakeholders themselves. The study will be applied in the Siliana watershed, northern Tunisia.

Keywords: WSN, spatio-temporal database, GIS, web mapping, drought indicators.

2238 On the Need for an Additional Methodology for Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science appeared about 40 years ago, in response to the challenge of artificial intelligence, as common territory for several scientific disciplines: information technology, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The new science was justified by the complexity of the problems related to human knowledge on the one hand, and on the other by the fact that none of the above-mentioned sciences could explain mental phenomena alone. Based on data supplied by the experimental sciences such as psychology and neurology, cognitive science builds models of the operation of the human mind. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence), that is, cognitive systems, whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction utterly necessary for mediating between the sciences mentioned. The general problematics of the cognitive approach comprises two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to be the result of interaction between all the component (included) systems. In psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculation methods. Viewing things from both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculation proves inefficient. Our research, carried out over more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.

Keywords: complementary methodology, connection approach, networks without scaling, quantum psychology.

2237 Assessing the Effect of Underground Tunnel Diameter on Structure-Foundation-Soil Performance under the Kobe Earthquake

Authors: Masoud Mahdavi

Abstract:

Today, developed and industrial cities contain all kinds of sewage and water-transfer canals, subway tunnels, infrastructure facilities, etc., which create underground cavities beneath buildings. The presence of these cavities causes changes in structural behavior that must be fully evaluated. In the present study, using the Abaqus finite element software, the effect of cavities of 0.5 and 1.5 meters in diameter (with a circular cross-section) at a depth of 2.5 meters below the ground surface on the performance of the foundation and the soil is evaluated. For this purpose, the Kobe earthquake record was applied to the models for 10 seconds, and pore water pressure and self-weight were included to obtain complete results. The results showed that, as circular cavities are created in the soil and their diameter increases, three indicators, (1) von Mises stress, (2) displacement and (3) plastic strain, follow oscillating, ascending and ascending trends, respectively, demonstrating the relationship between the diameter of underground cavities and the structural indicators of the structure-foundation-soil system.

Keywords: Underground excavations, foundation, structural substrates, Abaqus software, Kobe earthquake, time history analysis.

2236 Memory Estimation of Internet Server Using Queuing Theory: Comparative Study between M/G/1, G/M/1 and G/G/1 Queuing Models

Authors: L. K. Singh, Riktesh Srivastava

Abstract:

How to effectively allocate system resources so that gateway servers can process client requests is a challenging problem. In this paper, we propose an improved scheme for the autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time from gateway server information, in order to reduce response-time variance in the presence of bursty traffic. The most widespread consideration is performance, because gateway servers must offer cost-effective, high-availability services over long periods and therefore have to be scaled to meet the expected load. Performance measurements can be the basis for performance modeling and prediction: with the help of performance models, performance metrics (such as buffer estimates and waiting time) can be determined during the development process. This paper describes the queue models that can be applied to the estimation of queue length, from which the final value of the memory size is estimated. Both simulation and experimental studies using synthesized workloads, together with analysis of real-world gateway servers, demonstrate the effectiveness of the proposed system.
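
For the M/G/1 case, the mean queue length and waiting time used in this kind of buffer sizing follow from the Pollaczek-Khinchine formula; a small sketch, with illustrative load figures rather than the paper's measurements:

```python
def mg1_metrics(arrival_rate: float, service_rate: float, service_scv: float):
    """Pollaczek-Khinchine results for the M/G/1 queue.
    service_scv is the squared coefficient of variation of service time
    (1.0 recovers M/M/1, 0.0 gives M/D/1)."""
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable"
    lq = rho**2 * (1 + service_scv) / (2 * (1 - rho))  # mean number waiting
    wq = lq / arrival_rate                              # mean waiting time (Little's law)
    return lq, wq

# Illustrative sizing: requests of ~8 KB average, hypothetical load figures.
lq, wq = mg1_metrics(arrival_rate=900.0, service_rate=1000.0, service_scv=1.5)
print(f"Lq = {lq:.1f} requests, Wq = {wq * 1000:.2f} ms, "
      f"buffer estimate = {lq * 8:.0f} KB")
```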

Keywords: M/M/1, M/G/1, G/M/1, G/G/1, Gateway Servers, Buffer Estimation, Waiting Time, Queuing Process.

2235 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance

Authors: Sokkhey Phauk, Takeo Okazaki

Abstract:

The challenging task in educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to address this task is to learn student learning patterns together with their most influential factors and to obtain an early prediction of student learning outcomes, at a timely stage, for setting up improvement policies. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for improvement and development in the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and higher-performing models are subsequently developed. The hybrid random forest (Hybrid RF) produces the most successful classification. For the purpose of intervention and improving learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is developed for educational stakeholders to obtain early predictions of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
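
The abstract does not spell out how MICHI fuses the two scores, so the sketch below shows one plausible reading, combining per-feature rankings from mutual information and chi-square using scikit-learn; the dataset and the number of features kept are stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif

# Stand-in data; chi2 requires non-negative features, hence the shift.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X = X - X.min(axis=0)

mi = mutual_info_classif(X, y, random_state=0)
chi_scores, _ = chi2(X, y)

# One plausible reading of MICHI: fuse the two per-feature rankings
# (the exact fusion rule is not given in the abstract).
rank_mi = np.argsort(np.argsort(-mi))          # 0 = best by mutual information
rank_chi = np.argsort(np.argsort(-chi_scores))  # 0 = best by chi-square
combined = rank_mi + rank_chi
dominant = np.argsort(combined)[:5]             # keep the 5 top-ranked features
print("selected feature indices:", dominant)
```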

Keywords: Academic performance prediction system, prediction model, educational data mining, dominant factors, feature selection methods, student performance.

2234 Recognition of Gene Names from Gene Pathway Figures Using Siamese Network

Authors: Muhammad Azam, Micheal Olaolu Arowolo, Fei He, Mihail Popescu, Dong Xu

Abstract:

The number of biological papers is growing quickly, and with it the number of biological pathway figures in those papers. Each pathway figure conveys extensive biological information, such as the names of genes and how the genes are related, but manually annotating pathway figures takes a great deal of time and work. Even though advanced image-understanding models could speed up curation, these models still need to become more accurate. To improve gene name recognition from pathway figures, we applied a Siamese network that maps image segments to a library of images of known genes, similar to the way many photo applications recognize people from photos. We used a triplet loss function and a triplet spatial pyramid pooling network, combining a triplet convolutional neural network with spatial pyramid pooling (TSPP-Net). We compared VGG19 and VGG16 as the Siamese backbone; VGG16 achieved the better performance, with an accuracy of 93%, which is much higher than Optical Character Recognition (OCR) results.
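
A minimal sketch of the matching step described: a triplet loss on embedding vectors and a nearest-neighbour lookup against a library of known-gene embeddings. The margin, embedding size and gene names are illustrative; in the paper the embeddings come from the TSPP-Net backbone.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embedding vectors: pull the anchor toward the matching
    gene image (positive) and push it away from a different one (negative).
    The margin value is illustrative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

def nearest_gene(query_emb, library_embs, gene_names):
    """Match an image-segment embedding against the library of known genes."""
    dists = np.sum((library_embs - query_emb) ** 2, axis=1)
    return gene_names[int(np.argmin(dists))]

rng = np.random.default_rng(0)
lib = rng.normal(size=(4, 128))                       # toy embedding library
print(float(triplet_loss(lib[0], lib[0] + 0.05, lib[1])))
print(nearest_gene(lib[2] + 0.01 * rng.normal(size=128), lib,
                   ["TP53", "EGFR", "KRAS", "MYC"]))   # hypothetical gene names
```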

Keywords: Biological pathway, image understanding, gene name recognition, object detection, Siamese network, Visual Geometry Group.

2233 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited and uncertain. The efficiency of applying the new Soft Computing technology at these diagnostic stages, using fuzzy logic and neural network methods, is therefore considered. For this purpose, fuzzy multiple linear and nonlinear models (fuzzy regression equations) obtained from statistical fuzzy data are trained to high accuracy. To make the model of the GTE technical condition more adequate, the dynamics of changes in the skewness and kurtosis coefficients are analyzed. Studies of the changes in the values of these coefficients show that the distributions of GTE operating parameters have a fuzzy character, so consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that fuzzy statistical analysis is necessary for the preliminary identification of the engines' technical condition. Studies of the changes in correlation coefficient values also show their fuzzy character, so the application of fuzzy correlation analysis results is proposed for model selection. When sufficient information is available, it is proposed to use a recurrent algorithm for identifying the technical condition of an aviation GTE (using Hard Computing technology), based on measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive Least Squares Method, LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
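
The recursive least squares identification mentioned at the end can be illustrated compactly; the sketch below is the textbook RLS update with a forgetting factor, applied to a toy linear model rather than real GTE parameters.

```python
import numpy as np

class RecursiveLSM:
    """Standard recursive least squares with forgetting factor lam,
    identifying a linear model y = x . theta from noisy measurements."""
    def __init__(self, n_params, lam=0.99):
        self.theta = np.zeros(n_params)
        self.P = 1e4 * np.eye(n_params)   # large initial covariance
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                  # gain vector
        self.theta += k * (y - x @ self.theta)        # parameter update
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.theta

# Toy identification of y = 2*x1 - 0.5*x2 from noisy data.
rng = np.random.default_rng(1)
rls = RecursiveLSM(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 0.5 * x[1] + 0.01 * rng.normal()
    rls.update(x, y)
print(rls.theta.round(3))   # close to [2.0, -0.5]
```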

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

2232 Targeting Pulmonary Delivery by Optimizing Physicochemical Characteristics of the Instilled Liquid and Exploring Distribution of the Produced Liquids by Bench-Top Models and Scintigraphy of Rabbits' Lungs

Authors: Mohammad Nasri, Hossein Mirshekarpour

Abstract:

We aimed to investigate how pulmonary delivery distribution can be targeted and optimized by changing the physicochemical characteristics of the instilled liquid. We therefore created a new group of liquids that are: (a) eligible for the desired distribution within the lung because of assorted physicochemical characteristics; (b) capable of being inertly augmented with a broad range of chemicals; (c) free of interference with respiratory function; and (d) compatible with the airway surface liquid. We developed forty types of new liquid composed of carboxymethylcellulose sodium, glycerin and different types of polysorbates. Viscosity was measured using a programmable rheometer, and surface tension with a KRUSS tensiometer. We subsequently examined the liquids and delivery protocols with simple and branched glass-capillary-tube models of the airways. Finally, we explored the pulmonary distribution of liquids augmented with technetium-99m in mechanically ventilated rabbits, using a single-head, large-field-of-view gamma camera. Kinematic viscosity between 0.265 and 0.289 Stokes, density between 1 g/cm3 and 1.5 g/cm3, and surface tension between 25 dyn/cm and 35 dyn/cm were the most acceptable.

Keywords: Pulmonary delivery, Liquid instillation into airway, Physicochemical characteristics, Optimal distribution.

2231 Wood Ashes from an Electrostatic Filter as a Replacement for Fly Ashes in Concrete

Authors: Piotr-Robert Lazik, Harald Garrecht

Abstract:

Many concrete technologists are looking for a replacement for fly ash, which may become unavailable within a few years even though it occurs as a major component of many types of concrete. The importance of such a component is clear: it saves cement and reduces the amount of CO2 released to the atmosphere during cement production. Wood ash from electrostatic filters can be used as a valuable substitute in concrete. Laboratory investigations showed that wood-ash concrete has a compressive strength comparable to coal fly-ash concrete. These results indicate that wood ash can be used to manufacture normal concrete.

Keywords: Wood ashes, fly ashes, electrostatic filter, replacement, concrete technology.

2230 A Unified Approach for Naval Telecommunication Architectures

Authors: Y. Lacroix, J.-F. Malbranque

Abstract:

We present a chronological evolution of naval telecommunication networks. We distinguish periods: with or without multiplexers, with switch systems, with federative systems, with medium switching, and with medium switching plus wireless networks. This highlights the introduction of new layers and technologies into the architecture. These architectures are presented using layered transmission models, in a unified way, which enables us to integrate pre-existing models. A ship of a naval fleet has internal communications (the applications' networks on board) and external communications (the means of transmission between ships and shore). We propose architectures, deduced from the layer model, that form the point of convergence between the on-board networks and the HF and UHF radio and satellite resources. This modelling allows naval communications to be considered end to end, and more globally from the user on board to the user on shore, including transmission and networks on the shore side. The new architectures must take care of quality of service for end-to-end communications, all the more so as remote control is developing rapidly and will continue to do so. Naval telecommunications will become increasingly complex and will use ever more advanced technologies, so it will be necessary to establish clear global communication schemes to guarantee the consistency of the architectures. Our latest model has been implemented in a military naval situation and serves as the basic architecture for the RIFAN2 network.

Keywords: Naval telecommunications, network architecture, layer model, quality of service, RIFAN2.

2229 Technique for Online Condition Monitoring of Surge Arrestors

Authors: Anil S. Khopkar, Kartik S. Pandya

Abstract:

Lightning overvoltage phenomena in power systems cannot be avoided; however, they can be controlled to a certain extent. To prevent system failure, power system equipment must be protected against overvoltage. Metal oxide surge arresters (MOSA) are connected in the system to provide protection against overvoltages. Under normal working conditions a MOSA functions as an insulator, offering a conductive path only during overvoltage events. A MOSA consists of zinc oxide elements (ZnO blocks), which have nonlinear V-I characteristics; the blocks are connected in series and fitted in a ceramic or polymer housing. Over time, these components degrade due to continuous operation. The degradation of the zinc oxide elements increases the leakage current flowing through the surge arrester, which raises the temperature inside the arrester and further decreases the resistance of the zinc oxide elements. Consequently, the leakage current increases again, leading to still higher temperatures within the MOSA. This cycle creates a thermal runaway condition, and once a surge arrester reaches thermal runaway it cannot return to normal working conditions; this is a primary cause of premature failure of surge arresters. Given that the MOSA is a core protective device for electrical power systems against transients, it contributes significantly to the reliable operation of power system networks, so periodic condition monitoring of surge arresters is essential. Both online and offline condition monitoring techniques are available. Offline techniques are less popular because they require the removal of the surge arrester from the system, which entails a system shutdown; online techniques are therefore more commonly used. This paper presents an evaluation technique for surge arrester condition based on leakage current analysis. The maximum amplitudes of the total leakage current (IT), the fundamental resistive leakage current (IR), and the third-harmonic resistive leakage current (I3rd) are analyzed as indicators for surge arrester condition monitoring.
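
Below is a sketch of how such harmonic indicators could be extracted from a sampled leakage-current window using an FFT. Note that separating the resistive component (IR) from the capacitive part additionally requires a voltage reference for phase, which is omitted here; the signal amplitudes are synthetic.

```python
import numpy as np

def leakage_indicators(i_samples, fs, f0=50.0):
    """Condition-monitoring indicators from one captured window of MOSA
    leakage current: total peak amplitude plus fundamental and third-harmonic
    magnitudes. Phase-based separation of the resistive component is omitted."""
    n = len(i_samples)
    win = np.hanning(n)
    spec = np.fft.rfft(i_samples * win)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amplitude_at(f):
        k = np.argmin(np.abs(freqs - f))
        return 2.0 * np.abs(spec[k]) / win.sum()   # Hann-window amplitude correction

    return {
        "I_total_peak": float(np.max(np.abs(i_samples))),
        "I_fundamental": amplitude_at(f0),
        "I_3rd_harmonic": amplitude_at(3 * f0),
    }

# Synthetic test signal: 1 mA fundamental plus 50 uA third harmonic.
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i = 1e-3 * np.sin(2 * np.pi * 50 * t) + 50e-6 * np.sin(2 * np.pi * 150 * t)
print(leakage_indicators(i, fs))
```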

Keywords: Metal Oxide Surge Arrester, MOSA, overvoltage, total leakage current, resistive leakage current, third harmonic resistive leakage current, capacitive leakage current.

2228 Bayesian Network Model for Students' Laboratory Work Performance Assessment: An Empirical Investigation of the Optimal Construction Approach

Authors: Ifeyinwa E. Achumba, Djamel Azzi, Rinat Khusainov

Abstract:

There are three approaches to complete Bayesian network (BN) model construction: totally expert-centred, totally data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for constructing a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the totally data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute supportive evidence to the literature for the conceptual feasibility and efficiency of structure and parameter learning from data. The results also highlight other interesting themes.
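
For the data-centred approaches, structure and parameter learning can be sketched with the pgmpy library (API as in recent releases); the variables and records below are invented stand-ins for the laboratory-work domain, not the authors' data.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator

# Invented, discretized lab-work indicators; column names are hypothetical.
data = pd.DataFrame({
    "preparation": [0, 1, 1, 0, 1, 1, 0, 1],
    "procedure":   [0, 1, 1, 0, 1, 0, 0, 1],
    "report":      [0, 1, 0, 0, 1, 1, 0, 1],
    "performance": [0, 1, 1, 0, 1, 1, 0, 1],
})

# Totally data-centred construction: learn the structure from data ...
dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
model = BayesianNetwork(dag.edges())
# ... then learn the conditional probability tables.
model.fit(data, estimator=MaximumLikelihoodEstimator)
print(sorted(model.edges()))

# A semi data-centred variant fixes expert-specified edges and learns
# only the parameters from data.
expert = BayesianNetwork([("preparation", "performance"),
                          ("procedure", "performance")])
expert.fit(data, estimator=MaximumLikelihoodEstimator)
```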

Keywords: Bayesian networks, model construction, parameter learning, structure learning, performance index, model comparison.

2227 Discovery of Human HMG-CoA Reductase Inhibitors Using Structure-Based Pharmacophore Modeling Combined with Molecular Dynamics Simulation Methodologies

Authors: Minky Son, Chanin Park, Ayoung Baek, Shalini John, Keun Woo Lee

Abstract:

3-Hydroxy-3-methylglutaryl coenzyme A reductase (HMGR) catalyzes the conversion of HMG-CoA to mevalonate using NADPH and is involved in the rate-controlling step of mevalonate biosynthesis. Inhibition of HMGR is considered an effective way to lower cholesterol levels, making the enzyme a drug target for treating hypercholesterolemia, a major risk factor for cardiovascular disease. To discover novel HMGR inhibitors, we performed structure-based pharmacophore modeling combined with molecular dynamics (MD) simulation. Four HMGR inhibitors were used for MD simulations, and a representative structure from each simulation was selected by clustering analysis. Four structure-based pharmacophore models were generated using these representative structures. The generated models were validated and used in virtual screening to find novel scaffolds for inhibiting HMGR. The screened compounds were filtered by applying drug-like properties and used in molecular docking. Finally, four hit compounds were obtained, and their complexes were refined using energy minimization. These compounds might be potential leads for the design of novel HMGR inhibitors.

Keywords: Anti-hypercholesterolemia drug, HMGR inhibitor, Molecular dynamics simulation, Structure-based pharmacophore modeling.

2226 Lower-Energy Gait Pattern Generation in a 5-Link Biped Robot Using Image Processing

Authors: Byounghyun Kim, Youngjoon Han, Hernsoo Hahn

Abstract:

The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the center-of-gravity (COG) trajectory of human gait. Human beings naturally walk in a way that maintains stability while using minimum energy. This paper seeks a natural gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. The joint torques of a human cannot be applied directly to a biped robot because the two have different degrees of freedom; nonetheless, a human and a 5-link biped robot are kinematically similar. We therefore generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) that adapts the gait pattern from the human's zero moment point (ZMP) and the joint torques measured from the human gait pattern. The proposed algorithm creates a gait pattern for the biped robot that is as fluent as a human's and minimizes energy consumption, because the 5-link model takes into account the torque of each human joint on the sagittal plane and the ZMP trajectory on the frontal plane. The paper demonstrates the superiority of the proposed algorithm by evaluating the 5-link biped robot on two kinds of gait patterns: one generated in the conventional way using inverse kinematics, and one generated by the proposed method considering visual naturalness and efficiency.
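
The ZMP referred to above can be computed from link masses, center-of-mass positions and accelerations; a common simplified sagittal-plane form (neglecting the links' rotational inertia) is sketched below with illustrative numbers. A gait is commonly considered stable when the ZMP stays inside the support polygon.

```python
import numpy as np

def zmp_x(masses, x, z, ddx, ddz, g=9.81):
    """Sagittal-plane ZMP of a multi-link model from link COM positions and
    accelerations (rotational inertia terms neglected for brevity)."""
    num = np.sum(masses * ((ddz + g) * x - ddx * z))
    den = np.sum(masses * (ddz + g))
    return num / den

# Illustrative 5-link snapshot (units: kg, m, m/s^2); not measured data.
m = np.array([5.0, 5.0, 20.0, 5.0, 5.0])      # shank, thigh, torso, thigh, shank
x = np.array([0.05, 0.10, 0.12, 0.15, 0.20])  # link COM horizontal positions
z = np.array([0.25, 0.60, 1.00, 0.60, 0.25])  # link COM heights
ddx = np.array([0.1, 0.2, 0.3, 0.2, 0.1])     # horizontal accelerations
ddz = np.zeros(5)                              # vertical accelerations
print(f"x_zmp = {zmp_x(m, x, z, ddx, ddz):.3f} m")
```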

Keywords: 5-link biped robot, gait pattern, COG (Center of Gravity), ZMP (Zero Moment Point).

2225 Optimal Consumption of NaOH in Starch Gelatinization for Froth Flotation

Authors: André C. Silva, Débora N. Sousa, Elenice M. S. Silva, Thales P. Fontes, Raphael S. Tomaz

Abstract:

Starches are widely used as depressants in froth flotation operations in Brazil because of their efficiency in increasing selectivity in the reverse flotation of quartz, where the iron ore is depressed. The starch market has been growing and improving in recent years, leading to better products that meet the requirements of the mineral industry. The major source of starch used for iron ore is corn starch, which needs to be gelatinized with sodium hydroxide (NaOH) prior to use. This stage has a direct impact on industrial costs, since the lowest consumption of NaOH in gelatinization provides better control of the pH in froth flotation and reduces the amount of electrolytes present in the pulp. In order to evaluate their degree of gelatinization, different starches and flours were subjected to NaOH-addition and temperature-variation experiments. Samples of starch (corn, cassava, and HIPIX 100, HIPIX 101 and HIPIX 102, commercialized by Ingredion) and flour (cassava and potato) were tested. The starch samples were characterized by scanning electron microscopy, and the amylose content was determined through spectrometry, swelling and solubility tests. Gelatinization was carried out through titration with NaOH, keeping the solution temperature constant at 40 °C. At the end of the tests, the optimal amount of NaOH consumed to gelatinize starch or flour from the different botanical sources was established, and a correlation was obtained between the amylopectin content of the starch and the starch/NaOH ratio needed for its gelatinization.

Keywords: Froth flotation, gelatinization, sodium hydroxide, starches and flours.

2224 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of fully nonparametric regression estimators such as loess is compared to that of estimators that assume additivity, in several situations, including additive and non-additive regression scenarios. The comparison is made by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. A backward elimination selection procedure based on the Akaike Information Criterion is then proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension-reduction step is included for cases where the nonparametric estimator cannot be computed because of the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
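
A minimal sketch of AIC-driven backward elimination follows; plain OLS stands in for the additive and nonparametric fits so that the selection loop itself stays visible. Note how a linear stand-in can miss the nonlinear x2 effect, which echoes the paper's point about misspecification.

```python
import numpy as np
import statsmodels.api as sm

def backward_elimination_aic(X, y, names):
    """Backward elimination driven by AIC; OLS is a stand-in for the
    additive/nonparametric fits used in the paper."""
    keep = list(range(X.shape[1]))
    best = sm.OLS(y, sm.add_constant(X[:, keep])).fit().aic
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < best:          # dropping j improves the criterion
                best, keep, improved = aic, trial, True
                break
    return [names[k] for k in keep]

# Synthetic data: x1 enters linearly, x2 nonlinearly, x3-x5 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=200)
print(backward_elimination_aic(X, y, ["x1", "x2", "x3", "x4", "x5"]))
```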

Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.

2223 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluating simulated process models with verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system or process with respect to dynamic behaviour under steady-state and transient conditions. The verification and validation process helps qualify the process simulator for its intended purpose, whether that is comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements, to ensure that each step in the model development process fully incorporates all design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers and experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), Steady State, Transient State.

2222 Development of Coronal Field and Solar Wind Components for MHD Interplanetary Simulations

Authors: Ljubomir Nikolic, Larisa Trichtchenko

Abstract:

The connection between solar activity and adverse phenomena in the Earth's environment that can affect space- and ground-based technologies has spurred interest in Space Weather (SW) research. Great effort has been put into the development of suitable models that can provide advanced forecasts of SW events. With the progress in computational technology, it is becoming possible to develop operational, large-scale, physics-based models that incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities, we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular simulation framework for the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report and discuss the development of the coronal field and solar wind components for a large-scale MHD code. The model for these components is based on a potential field source surface (PFSS) model and an empirical Wang-Sheeley-Arge (WSA) solar wind relation.
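
One published form of the WSA relation maps the flux-tube expansion factor fs to solar wind speed; the coefficients below follow Arge and Pizzo (2000), though operational implementations, likely including the authors', use tuned variants.

```python
def wsa_wind_speed(expansion_factor: float) -> float:
    """One published form of the Wang-Sheeley-Arge relation (Arge & Pizzo, 2000):
    slower wind from rapidly expanding flux tubes. Coefficients here may differ
    from the authors' implementation."""
    return 267.5 + 410.0 / expansion_factor ** (2.0 / 5.0)

# Small expansion factor: fast wind from coronal-hole cores;
# large expansion factor: slow wind near the streamer belt.
for fs in (1.0, 3.5, 10.0, 50.0):
    print(f"fs = {fs:5.1f}  ->  v = {wsa_wind_speed(fs):6.1f} km/s")
```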

Keywords: Space weather, numerical modeling, coronal field, solar wind.

2221 Evaluation of Fitts’ Law Index of Difficulty Formulation for Screen Size Variations

Authors: Hidehiko Okada, Takayuki Akiba

Abstract:

It is well known from Fitts' law that the time for a user to point at a target on a GUI screen can be modeled as a linear function of the "index of difficulty (ID)." In this paper, the authors investigate whether the traditional ID formulation is appropriate independently of device screen size. The results of our experiment reveal that the ID formulation may not consistently capture actual difficulty: users' pointing performance is not consistent across target variations whose indices of difficulty are identical. The term A/W may not be appropriate, because it is this term that causes the observed inconsistency. Based on this finding, the authors then evaluate the applicability of models other than Fitts' law; multiple regression models are found to represent the effects of target design variations appropriately. The authors next attempt to improve the definition of ID in Fitts' model. Our idea is to scale the size or the distance values depending on the screen size. The modified model is found to fit the users' pointing data well, which supports the idea.
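
For reference, the sketch below fits the standard Fitts' law regression MT = a + b*ID with the Shannon formulation ID = log2(A/W + 1); the pointing measurements are invented, and a poor fit across screen-size conditions is the kind of inconsistency the authors report.

```python
import numpy as np

def shannon_id(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(amplitude / width + 1.0)

# Hypothetical pointing measurements: A and W in pixels, movement time in ms.
A = np.array([128, 256, 512, 512, 1024, 1024], dtype=float)
W = np.array([16, 16, 16, 64, 64, 128], dtype=float)
MT = np.array([480, 610, 760, 520, 650, 560], dtype=float)

ID = shannon_id(A, W)
b, a = np.polyfit(ID, MT, 1)            # linear model MT = a + b*ID
pred = a + b * ID
r2 = 1 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"MT = {a:.0f} + {b:.0f}*ID ms, R^2 = {r2:.3f}")
```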

Keywords: Fitts’ law, pointing device, small screen, touch user interface, usability.

2220 Business Domain Modelling Using an Integrated Framework

Authors: Mohammed Salahat, Steve Wade

Abstract:

This paper presents an application of a "Systematic Soft Domain Driven Design Framework" as a soft systems approach to the domain-driven design of information systems. The framework uses SSM as a guiding methodology, within which we have embedded a sequence of design tasks based on UML, leading to the implementation of a software system using the Naked Objects framework. The framework has been used in action research projects involving the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model, expressed in UML, for the given business domain. The framework was proposed and evaluated in our previous work; in this paper, a real case study, an information retrieval system for academic research, is used to provide further practice and evaluation of the framework in a different business domain. We argue that there are advantages to combining and using techniques from different methodologies in this way for business domain modelling. The framework is overviewed and justified as a multimethodology, drawing on Mingers' multimethodology ideas.

Keywords: SSM, UML, domain-driven design, soft domain-driven design, naked objects, soft language, information retrieval, multimethodology.

2219 Medical Imaging Techniques in Clinical Medicine

Authors: Sharan Badiger, Prema T. Akkasaligar

Abstract:

Medical imaging technology has experienced dramatic change in the last few years. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes, such as medical procedures and diagnosis, or for medical science, including the study of normal anatomy and function. With the growth of computing and imaging technology, medical imaging has greatly influenced the medical field, and the diagnosis of a health problem is now highly dependent on the quality and credibility of the image analysis. This paper deals with the various aspects and types of medical imaging.

Keywords: Computed Tomography, Echocardiography, Medical Imaging, Magnetic Resonance, Ultrasound Imaging.

2218 Evaluation of a Newly Developed Dot-ELISA Test for Identification of Naja-naja sumatrana and Calloselasma rhodostoma Venom Antigens

Authors: A.S. Sikarwar, S. Ambu, T .H. Wong

Abstract:

Snake bite cases in Malaysia most often involve the species Naja naja and Calloselasma rhodostoma. In keeping with the need for rapid snake venom detection kits in a clinical setting, plate and dot-ELISA tests were developed for the venoms of Naja-naja sumatrana and Calloselasma rhodostoma and for the cobra venom fraction V antigen. Polyclonal antibodies were raised and used to prepare the reagents for the dot-ELISA test kit, which was tested in mouse, rabbit and virtual human models. The newly developed dot-ELISA kit was able to detect a minimum venom concentration of 244 ng/ml, with cross-reactivity of one antibody type. The dot-ELISA system was sensitive and specific for all three snake venom types in all tested animal models. The lowest minimum detectable venom concentration, 244 ng/ml of the cobra venom fraction V antigen, was obtained in the rabbit model; the highest, 1953 ng/ml across a multitude of venoms, was obtained in mice. The developed dot-ELISA system for the detection of the three snake venom types was successful, with a sensitivity of 95.8% and a specificity of 97.9%.

Keywords: ELISA, venom, SVDK, Naja-naja sumatrana, Calloselasma rhodostoma.

2217 Constructing a Bayesian Network for Solar Energy in Egypt Using Life Cycle Analysis and Machine Learning Algorithms

Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary

Abstract:

In an era where machines run and shape our world, the need for a stable, unending source of energy emerges. This study focuses on solar energy in Egypt as a renewable source. The most important factors that could affect solar energy's market share throughout its life-cycle production were analyzed and filtered, and the relationships between them were derived before structuring a Bayesian network. In addition, forecasting models were built for multiple factors to predict their states in Egypt by 2035, based on historical data and patterns, for use as the node states in the network. Thirty-seven factors were found to potentially have an impact on the use of solar energy; these were reduced to the 12 factors judged most influential on the solar energy life cycle in Egypt, based on expert surveys and data analysis, with some factors found to recur across multiple stages. The presented Bayesian network can later be used for scenario and decision analysis of using solar energy in Egypt as a stable renewable source for generating any type of energy needed.
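
A small sketch of the forecasting step, fitting an ARIMA model to one factor's history and projecting it to 2035 with statsmodels; the series is synthetic, since the study's data are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Stand-in history for one factor (e.g. installed capacity); the study's
# real data are not reproduced here.
rng = np.random.default_rng(0)
years = pd.date_range("2000", periods=22, freq="YS")
series = pd.Series(np.cumsum(rng.gamma(2.0, 50.0, size=22)), index=years)

model = ARIMA(series, order=(1, 1, 1)).fit()   # a seasonal (SARIMA) order could be used instead
forecast = model.forecast(steps=13)            # extend the series to 2035
print(forecast.tail(3).round(1))
# Forecast values like these would then be discretized into the states of the
# corresponding node in the Bayesian network.
```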

Keywords: ARIMA, autocorrelation, Bayesian network, forecasting models, life cycle, partial correlation, renewable energy, SARIMA, solar energy.
