Search results for: input impedance
1066 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Authors: Shahriar Farzam, Maryam Rastgarpour
Abstract:
Image denoising plays an extremely important role in digital image processing, and curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location, and these coefficients can be modified to enhance contrast in an image. Our proposed method first applies the two-dimensional FDCT via USFFT to the input image and then thresholds the curvelet coefficients to enhance the CBCT image. Applying the unequally spaced fast Fourier transform leads to an accurate, high-resolution reconstruction of the image. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Keywords: curvelet transform, CBCT, image enhancement, image denoising
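The pipeline described above (forward FDCT, coefficient thresholding, inverse transform) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the forward and inverse transforms are passed in as callables because the abstract does not name a specific curvelet library, and the threshold/gain scheme is an assumed, simple variant of coefficient modification.

```python
import numpy as np

def enhance_image(image, fdct, ifdct, rel_threshold=0.1, gain=1.5):
    """Sketch of curvelet-domain contrast enhancement.

    fdct / ifdct: user-supplied forward and inverse FDCT-USFFT
    callables; fdct is assumed to return a dict of coefficient
    arrays indexed by (scale, orientation), as in the abstract.
    """
    coeffs = fdct(image)
    modified = {}
    for key, c in coeffs.items():
        mag = np.abs(c)
        t = rel_threshold * mag.max()
        # zero small (noise-dominated) coefficients, boost the rest
        modified[key] = np.where(mag < t, 0.0, gain * c)
    return ifdct(modified)

# trivial demo: identity "transform" just to exercise the plumbing
ident = lambda x: {"all": np.asarray(x, dtype=float)}
inv = lambda c: c["all"]
print(enhance_image([[1.0, 0.01], [0.5, 0.02]], ident, inv))
```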
Procedia PDF Downloads 300
1065 Implant Operation Guiding Device for Dental Surgeons
Authors: Daniel Hyun
Abstract:
Dental implants are among the top three reasons dentists are sued for malpractice, usually over implant complications caused by an incorrect implant angle during surgery. At present, surgeons typically use a 3D-printed surgical guide customized to the patient's teeth; however, such guides cannot be reused for other patients and take time to produce. Therefore, I made a guiding device to assist the surgeon during implant operations. The surgeon inputs the objective of the operation, and the device continuously checks whether the surgery is heading towards that objective within a set range, signalling the surgeon via an LED. We tested the prototypes' consistency and accuracy by examining the output graph, the average standard deviation, and the average change of the calculated angles; performance accuracy was obtained by running the device and checking its outputs. My first prototype used the accelerometer and gyroscope of an MPU6050 sensor on an Arduino and produced an unstable graph, with a standard deviation of 0.0295, an average change of 0.25, and 66.6% performance accuracy. The second prototype used only the gyroscope and produced a stable graph, with a standard deviation of 0.0062, an average change of 0.075, and 100% performance accuracy, indicating that the accelerometer degraded the functionality of the device. Using only the gyroscope allowed the orientations of separate axes to be measured without affecting each other and increased the stability and accuracy of the measurements.
Keywords: implant, guide, accelerometer, gyroscope, handpiece
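The core check-and-signal loop the abstract describes can be sketched as follows; this is a minimal sketch in which read_orientation and set_led are hypothetical stand-ins for the MPU6050 read-out and the LED driver, and the 2-degree tolerance is an assumed value (the actual firmware and tolerances are not given in the abstract).

```python
TOLERANCE_DEG = 2.0  # assumed acceptable deviation from the target angle

def within_target(current, target, tol=TOLERANCE_DEG):
    """True if both pitch and roll are within tol degrees of target."""
    return all(abs(c - t) <= tol for c, t in zip(current, target))

def guidance_step(read_orientation, set_led, target):
    """One iteration of the guidance loop.

    read_orientation() -> (pitch, roll) from the gyroscope;
    set_led(ok: bool) drives the indicator LED. Both are
    hypothetical hardware hooks supplied by the caller.
    """
    current = read_orientation()
    set_led(within_target(current, target))

# demo with stub hardware hooks
demo_read = lambda: (44.5, 0.8)
demo_led = lambda ok: print("LED", "green" if ok else "red")
guidance_step(demo_read, demo_led, target=(45.0, 0.0))
```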
Procedia PDF Downloads 43
1064 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters from training data, and we then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
Procedia PDF Downloads 353
1063 Interpretable Deep Learning Models for Medical Condition Identification
Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji
Abstract:
Accurate prediction of a medical condition with clear clinical evidence is a long-sought goal in medical management and health insurance. Although great progress has been made with machine learning algorithms, the medical community remains, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction together with clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member's encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years of medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, predicts CKD3, and calculates the contribution of individual events to the predicted outcome, so the outcome can be explained with the clinical evidence identified by the model. For example: member A had 36 medical encounters in the past three years, comprising multiple office visits, lab tests, and medications. The model predicts that member A has a high risk of CKD3, with the following strongly contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low 'Glomerular filtration rate' kidney-function tests; among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model did not identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that was validated by clinicians. Without extra effort, the interpretation is generated directly by the model and presented together with the occurrence dates. Our model uses the medical data in their most raw format, without any further aggregation, transformation, or mapping; this greatly simplifies data preparation, reduces the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep learning model that uses a 3-level attention structure, sources both EMR and claims data covering all 4 types of medical codes, runs on the entire Medicare population of a big insurance company, and, more importantly, directly generates model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
Keywords: deep learning, interpretability, attention, big data, medical conditions
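A toy version of the attention pooling used at each of the three levels can be sketched as below. This is an illustrative sketch with made-up dimensions and random data, not the authors' network: the same softmax-weighted pooling is applied to codes within an encounter, then to encounters within a code type, then across the four code types.

```python
import numpy as np

def attention_pool(H, w):
    """Softmax-attention pooling: H is (n_items, d), w is (d,).
    Returns a pooled (d,) vector and the attention weights."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H, alpha

rng = np.random.default_rng(0)
d = 8
w_code, w_enc, w_type = rng.normal(size=(3, d))

# 4 code types -> 3 encounters per type -> code embeddings per encounter
member = [[rng.normal(size=(rng.integers(2, 5), d)) for _ in range(3)]
          for _ in range(4)]

type_vecs = []
for encounters in member:
    enc_vecs = np.stack([attention_pool(E, w_code)[0] for E in encounters])
    type_vecs.append(attention_pool(enc_vecs, w_enc)[0])
member_vec, type_alpha = attention_pool(np.stack(type_vecs), w_type)
print(member_vec.shape, type_alpha)  # (8,) vector and per-type weights
```

The per-level attention weights (alpha) are what make the contribution of individual codes, encounters, and code types directly readable, which is the interpretability mechanism the abstract describes.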
Procedia PDF Downloads 91
1062 Fire Characteristics of Commercial Flame Retardant Polycarbonate under Different Oxygen Concentrations: Ignition Time and Heat Blockage
Authors: Xuelin Zhang, Shouxiang Lu, Changhai Li
Abstract:
Commercial flame-retardant polycarbonate samples of different thicknesses, used as the main interior carriage material in high-speed trains, were investigated in a Fire Propagation Apparatus under different external heat fluxes and oxygen concentrations from 12% to 40%, in order to study their fire characteristics and quantitatively analyze ignition time, mass loss rate, and heat blockage. The additives in the commercial flame-retardant polycarbonate were intumescent, and the samples maintained a steady height before ignition when heated. The results showed that the transformed ignition time (1/t_ig)ⁿ increased linearly with external heat flux under each oxygen concentration after deducting the heat blockage due to pyrolysis products; that the mass loss rate varied linearly with external heat flux, with the slope of the fitted line decreasing as oxygen concentration increased; and that the heat blockage, independent of external heat flux, rose with increasing oxygen concentration. These data are important as input to fire simulation models used to evaluate the fire risk of commercial flame-retardant polycarbonate.
Keywords: ignition time, mass loss rate, heat blockage, fire characteristic
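The linear relation described above is usually written in the form of the classical ignition correlation. The following is a sketch of the form implied by the abstract, not an equation reproduced from the paper: the exponent n and constant a are fitted, the heat blockage term is subtracted as the abstract states, and the critical flux is the assumed intercept.

```latex
\left(\frac{1}{t_{ig}}\right)^{n} = a\left(\dot{q}''_{e} - \dot{q}''_{b} - \dot{q}''_{cr}\right)
```

Here t_ig is the ignition time, q̇″_e the external heat flux, q̇″_b the heat blockage by pyrolysis products, and q̇″_cr the critical heat flux for ignition.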
Procedia PDF Downloads 282
1061 Educational Experiences in Engineering in the COVID Era and Their Comparative Analysis, Spain, March to June 2020
Authors: Borja Bordel, Ramón Alcarria, Marina Pérez
Abstract:
In March 2020, an unexpected public health crisis caused by COVID-19 was declared in Spain. All of a sudden, all degrees, classes, evaluation tests, and projects had to be transformed into online activities. However, the chaotic situation generated by such a complex operation, executed without any well-established procedure, led to very different experiences and, finally, results. In this paper, we describe three experiences at two different universities in Madrid: on the one hand, the Technical University of Madrid, a public university with little experience in online education; on the other hand, Alfonso X el Sabio University, a private university with more than five years of experience in online teaching. All analyzed subjects were related to computer engineering. Professors and students answered a survey, and personal interviews were also carried out. Besides, the professors' workload and the students' academic results were compared. From the comparative analysis of all these experiences, we extract the most successful strategies, methodologies, and activities. The recommendations in this paper will be useful for courses during the coming months, while the health situation is still affecting educational organizations, and will at the same time serve as input for the upcoming digitalization of higher education.
Keywords: educational experience, online education, higher education digitalization, COVID, Spain
Procedia PDF Downloads 140
1060 Urban Resilience and Its Prioritised Components: Analysis of Industrial Township Greater Noida
Authors: N. Mehrotra, V. Ahuja, N. Sridharan
Abstract:
Resilience is an all-hazard, proactive approach that requires multidisciplinary input into the interrelated variables of the city system. This research identifies and operationalizes indicators for assessment in the domains of institutions, infrastructure, and knowledge, all three operating in task-oriented community networks. This paper gives a brief account of the methodology developed for the assessment of urban resilience and its prioritised components for a target population within a newly planned urban complex integrating Surajpur and Kasna villages as nodes. People's perception of urban resilience has been examined by conducting a questionnaire survey among the target population of Greater Noida. As defined by experts, the urban resilience of a place is considered to be both a product and a process of operation to regain normalcy after a disturbance of a certain level. Based on this methodology, six indicators are identified that contribute to the perception of urban resilience, both in the process of evolution and as an outcome, and the relative significance of the six R's has been established. The dependency factors of the various resilience indicators are also explored, generating a new perspective for future research in disaster management. Based on the stated factors, this methodology can be applied to assess the urban resilience requirements of a well-planned town, which is not an end in itself but calls for new beginnings.
Keywords: disaster, resilience, system, urban
Procedia PDF Downloads 458
1059 Theoretical Analysis of the Solid State and Optical Characteristics of Calcium Sulphide Thin Film
Authors: Emmanuel Ifeanyi Ugwu
Abstract:
Calcium sulphide, one of the chalcogenide group of thin films, is analyzed in this work using a theoretical approach in which a scalar wave is propagated through the thin-film medium deposited on a glass substrate, under the assumption that the dielectric medium has a homogeneous reference dielectric constant term and a perturbed dielectric function representing the deposited thin film on the surface of the substrate. These were substituted into a defined scalar wave equation, which was solved by first transforming it into a Volterra equation of the second kind and applying separation of variables, after which a Green's function technique was introduced to obtain a model equation for the wave propagating through the thin film. This equation was then used to compute the propagated field for input wavelengths in the UV, visible, and near-infrared regions, considering the influence of the dielectric constants of the thin film on the propagating field. The results obtained were used in turn to compute the band gap, solid state, and optical properties of the thin film.
Keywords: scalar wave, dielectric constant, calcium sulphide, solid state, optical properties
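The formulation described corresponds to the standard scalar Helmholtz equation with a perturbed dielectric function and its Green's-function (Lippmann-Schwinger type) solution. The following is a sketch of that standard form with symbols assumed here, since the abstract does not write the equations out:

```latex
\nabla^{2}\psi(\mathbf{r}) + k^{2}\left[\varepsilon_{0} + \Delta\varepsilon(\mathbf{r})\right]\psi(\mathbf{r}) = 0,
\qquad
\psi(\mathbf{r}) = \psi_{0}(\mathbf{r}) + k^{2}\int G(\mathbf{r},\mathbf{r}')\,\Delta\varepsilon(\mathbf{r}')\,\psi(\mathbf{r}')\,d\mathbf{r}'
```

where ψ₀ is the field in the unperturbed medium, ε₀ the homogeneous reference dielectric constant, Δε the perturbation representing the film, and G the Green's function of the homogeneous problem (the sign of the integral term depends on the convention chosen for G).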
Procedia PDF Downloads 118
1058 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph
Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn
Abstract:
Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images, have been widely applied to classification and prediction problems in various fields. In this study, we apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to use a CNN as a binary classifier that predicts stock market direction (up or down) from a graph given as input; that is, our proposal is to build a machine learning algorithm that mimics a person who looks at a chart and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into 5-day, 10-day, 15-day, and 20-day windows. In step 2, it creates graphs for each interval. In the next step, CNN classifiers are trained on the graphs generated in the previous step. In step 4, the hyperparameters of the trained model are optimized using the validation dataset. To validate our model, we apply it to the prediction of the KOSPI200 over 1,986 days across eight years (2009 to 2016). The experimental dataset includes 14 technical indicators, such as CCI, Momentum, and ROC, as well as the daily closing price of the KOSPI200 index of the Korean stock market.
Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction
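A minimal sketch of the kind of binary CNN classifier described above is given below. The architecture, image size, and optimizer are assumptions for illustration; the abstract does not specify them.

```python
import tensorflow as tf

def build_direction_classifier(input_shape=(64, 64, 1)):
    """Toy CNN mapping a rendered price chart to P(up)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # up/down
    ])

model = build_direction_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(chart_images, up_down_labels, validation_split=0.2)
```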
Procedia PDF Downloads 425
1057 Law and its Implementation and Consequences in Pakistan
Authors: Amir Shafiq, Asif Shahzad, Shabbar Mehmood, Muhammad Saeed, Hamid Mustafa
Abstract:
Legislation comprises the laws or statutes enacted by a sovereign authority, which can generally be enforced by the courts of law from time to time to accomplish its objectives. Historically speaking, upon the emergence of Pakistan in 1947, the intact laws of the British Raj remained effective after being adapted to Islamic ideology; thus, there was an intention to begin the statute book afresh in Pakistan's legal history. In consequence, the process of developing detailed plans, procedures, and mechanisms to ensure that legislative and regulatory requirements are achieved began, keeping in view the cultural values and local customs. This article is an input to the enduring discussion about implementing the rule of law in Pakistan, since the rule of law requires a harmony of laws, mostly in the form of codified state laws. Pakistan has a plural legal culture, where completely different and independent systems of law, such as Mohammadan law, state law, and traditional law, coexist. The law actually practiced in Pakistan is largely this traditional law, even though it is not acknowledged by the State. This causes the central problem for the rule of law: the gap between state laws and cultural values. These values, customs, and so-called traditional laws are the main obstacle to enforcing state law in its true letter and spirit, and they have caused dissatisfaction among the masses and distrust of the country's judicial system.
Keywords: consequences, implement, law, Pakistan
Procedia PDF Downloads 433
1056 Estimations of Spectral Dependence of Tropospheric Aerosol Single Scattering Albedo in Sukhothai, Thailand
Authors: Siriluk Ruangrungrote
Abstract:
Analyses of available data from MFR-7 measurements were performed and discussed in a study of tropospheric aerosol and its consequences in Thailand, since the aerosol single scattering albedo (ASSA, ω) is one of the most important parameters for determining the aerosol effect on radiative forcing. Here, ω was determined directly as the ratio of aerosol scattering optical depth to aerosol extinction optical depth (τ_scat/τ_ext), without using any aerosol computer-code models. This has the benefit of eliminating the uncertainty caused by modeling assumptions and the estimation of actual aerosol input data. The diurnal ω of five cloudless days in winter and early summer was investigated at five distinct wavelengths (415, 500, 615, 673, and 870 nm), taking into account Rayleigh scattering and atmospheric column NO2 and ozone contents, and the tendency of the spectral dependence of ω in the two seasons was observed. The character of the spectral results reveals that during wintertime, the atmosphere of this inland rural vicinity was probably dominated by a smaller loading of soil dust aerosols than in early summer; hence, the major aerosol loading, particularly in summer, was a mixture of both soil dust and biomass burning aerosols.
Keywords: aerosol scattering optical depth, aerosol extinction optical depth, biomass burning aerosol, soil dust aerosol
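For reference, the defining relation used above, written out (λ denotes wavelength; the decomposition of extinction into scattering plus absorption is the standard identity, not taken from the abstract):

```latex
\omega(\lambda) = \frac{\tau_{scat}(\lambda)}{\tau_{ext}(\lambda)}
= \frac{\tau_{scat}(\lambda)}{\tau_{scat}(\lambda) + \tau_{abs}(\lambda)}
```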
Procedia PDF Downloads 405
1055 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the optimal values of certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures, including safety, congestion, and accessibility. The designed transportation network should improve the system's objective functions while taking the route choice behavior of network users into account. NDP studies have mostly investigated random demand and route selection constraints separately, due to computational challenges; in this work, we consider both simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, under random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used as the travel time function on each link. We consider a city with origin and destination nodes, each of which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of road expansion and the origin zones simultaneously, minimizing the travel and expansion costs of routes and origin zones on one side and CO emissions on the other at the same time. Demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model; treating both demand and route choice as random is more realistic for urban network design. The epsilon-constraint method, applicable to both linear and nonlinear multi-objective problems, is used to solve the problem: the first objective (the cost function) is kept as the objective function, and the second objective becomes a constraint bounded by an epsilon, an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with two origin zones, two destination zones, and seven links is solved in GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions, in which, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
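The epsilon-constraint procedure described above can be sketched generically as follows. This is a toy two-objective example with made-up quadratic functions, not the paper's GAMS model; it only illustrates keeping one objective and sweeping the bound on the other.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):      # first objective (kept as the objective function)
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def emission(x):  # second objective (moved into a constraint)
    return x[0] ** 2 + (x[1] - 3) ** 2

x0 = np.zeros(2)
# sweep epsilon from the worst to the best achievable emission value
worst = emission(minimize(cost, x0).x)
best = minimize(emission, x0).fun
pareto = []
for eps in np.linspace(worst, best, 15):
    res = minimize(cost, x0, constraints=[
        {"type": "ineq", "fun": lambda x, e=eps: e - emission(x)}])
    if res.success:
        pareto.append((cost(res.x), emission(res.x)))
print(pareto)  # cost rises as the emission bound tightens
```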
Procedia PDF Downloads 647
1054 Facility Data Model as Integration and Interoperability Platform
Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes
Abstract:
Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems, particularly through increased use of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to such an integrative facility data model, based on ontology modelling, is presented. The complete ontology development process is described, from input data acquisition through ontology concept definition to ontology population. First, a core facility ontology was developed, representing the generic facility infrastructure and comprising the common facility concepts relevant from a facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.
Keywords: airport ontology, energy management, facility data model, ontology modeling
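The extend-then-populate pattern described above can be illustrated with a small RDF/OWL sketch using rdflib; the namespace and class names here are invented for illustration and are not taken from the actual airport ontologies.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

FAC = Namespace("http://example.org/facility#")  # hypothetical namespace
g = Graph()
g.bind("fac", FAC)

# core facility ontology: generic concepts
g.add((FAC.Facility, RDF.type, OWL.Class))
g.add((FAC.EnergyMeter, RDF.type, OWL.Class))

# extension for a specific infrastructure type (an airport)
g.add((FAC.Airport, RDF.type, OWL.Class))
g.add((FAC.Airport, RDFS.subClassOf, FAC.Facility))

# population with individuals
g.add((FAC.Malpensa, RDF.type, FAC.Airport))
g.add((FAC.Malpensa, RDFS.label, Literal("Malpensa Airport")))

print(g.serialize(format="turtle"))
```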
Procedia PDF Downloads 448
1053 Bacteriological Safety of Sachet Drinking Water Sold in Benin City, Nigeria
Authors: Stephen Olusanmi Akintayo
Abstract:
Access to safe drinking water remains a major challenge in Nigeria, and where water is available, its quality is often in doubt. An alternative to inadequate clean drinking water has been found in treated drinking water packaged in heat-sealed nylon sachets, commonly referred to as "sachet water". Sachet water is common in Nigeria, as its selling price is within the reach of the low socio-economic class and setting up a production unit does not require huge capital input. The bacteriological quality of selected sachet water stored at room temperature over a period of 56 days was determined to evaluate the safety of the sachet drinking water. A test for the detection of coliform bacteria was performed, and the result showed no coliform bacteria, indicating the absence of faecal contamination throughout the 56 days. Heterotrophic plate counts (HPC) were done at 14-day intervals, and the samples showed HPC between 0 cfu/mL and 64 cfu/mL, with the highest count observed on day 1. The count decreased between days 1 and 28, and no growth was observed between days 42 and 56; the decrease in HPC suggests the presence of residual disinfectant in the water. The organisms isolated were identified as Staphylococcus epidermidis and S. aureus. The presence of these microorganisms in sachet water is indicative of contamination during processing and handling.
Keywords: coliform, heterotrophic plate count, sachet water, Staphylococcus aureus, Staphylococcus epidermidis
Procedia PDF Downloads 341
1052 The Use of Building Energy Simulation Software in Case Studies: A Literature Review
Authors: Arman Ameen, Mathias Cehlin
Abstract:
The use of Building Energy Simulation (BES) software has increased over the last two decades, in parallel with the development of increased computing power and easy-to-use software applications. This type of software is primarily used to simulate the energy use and indoor environment of a building. The rapid development of these tools has raised their level of user-friendliness, improved parameter input options, and increased the possibilities for analysis, whether of a single building component or an entire building. This, in turn, has led many researchers to utilize BES software in their research to various degrees. The aim of this paper is to carry out a literature review of the use of the BES software IDA Indoor Climate and Energy (IDA ICE) in the scientific community. The focus of this paper is specifically the use of the software for whole-building energy simulation: the number and types of articles and their publication dates, the area of application, the types of parameters used, the location and type of the studied buildings, the type of analysis, and the solution methodology. Another aspect examined, which is of great interest, is the method of validation of the simulation results. The results show an upward trend in the use of IDA ICE and that researchers use the software to various degrees depending on the case and aim of their research. The level of validation of the simulations carried out in these articles varies depending on the type of article and type of analysis.
Keywords: building simulation, IDA ICE, literature review, validation
Procedia PDF Downloads 136
1051 Reformulation of Theory of Critical Distances to Predict the Strength of Notched Plain Concrete Beams under Quasi Static Loading
Authors: Radhika V., J. M. Chandra Kishen
Abstract:
The theory of critical distances (TCD), owing to its appealing characteristics, has been successfully used in the past to predict the strength of brittle as well as ductile materials weakened by the presence of stress risers, under both static and fatigue loading. Utilising most of the TCD's unique features, this paper summarises an attempt to reformulate the point method of the TCD to predict the strength of notched plain concrete beams under mode I quasi-static loading. The zone of microcracks responsible for the non-linearity of concrete is taken into account through the concept of an effective elastic crack. An attempt is also made to correlate the material characteristic length required for the application of the TCD with the maximum aggregate size of the concrete mix, eliminating the need for extensive experimentation prior to applying the TCD. The devised reformulation and the proposed power-law-based relationship are found to yield satisfactory predictions of the static strength of notched plain concrete beams, with the geometric dimensions of the beam, the tensile strength, and the maximum aggregate size of the concrete mix being the only input parameters needed.
Keywords: characteristic length, effective elastic crack, inherent material strength, mode I loading, theory of critical distances
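For reference, the point method of the TCD is conventionally stated as below. This is the standard textbook form, not reproduced from the paper: σ₀ is the inherent material strength, K_Ic the fracture toughness, and L the material characteristic length.

```latex
\sigma\!\left(r = \frac{L}{2}\right) \geq \sigma_{0},
\qquad
L = \frac{1}{\pi}\left(\frac{K_{Ic}}{\sigma_{0}}\right)^{2}
```

That is, failure is predicted when the linear-elastic stress evaluated at a distance L/2 from the notch tip reaches the inherent strength σ₀.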
Procedia PDF Downloads 98
1050 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as the object's background and foreground. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research and few algorithms address color images. Most image segmentation algorithms and techniques vary based on the input data and the application, and nearly all of them are unsuitable for noisy environments. Much of the work that has been done uses Markov Random Fields (MRF), which involve heavy computation but are said to be robust to noise. In recent years, image segmentation has been applied to problems such as easing the processing of an image, interpreting the contents of an image, and simplifying image analysis. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the years, including neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and potential future developments. This review concludes that no single technique is perfectly suitable for segmenting all the different types of images, but the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
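As a concrete instance of the thresholding family mentioned above, here is a minimal sketch of Otsu's global threshold using scikit-image; the built-in sample image and this particular usage are illustrative and are not drawn from the review itself.

```python
from skimage import data, filters

image = data.camera()                 # built-in gray-scale test image
t = filters.threshold_otsu(image)     # Otsu's optimal global threshold
segmented = image > t                 # boolean foreground/background mask
print(f"threshold = {t}, foreground fraction = {segmented.mean():.2f}")
```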
Procedia PDF Downloads 96
1049 Disaggregation of the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region
Authors: Mohammad Bakhshi, Firas Al Janabi
Abstract:
High-resolution rainfall data are very important as input to hydrological models. Among the models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate three different rainfall resolutions (4-hourly, hourly, and 10-minute) from daily data for a record period of around 20 years. This was done with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rainfall datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov (K-S) test, standard statistics, and exceedance probability. The tool preserved the daily rainfall totals on wet days well; however, the generated rainfall is concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging, considering the tendency of the generated high-resolution rainfall data to be overestimated.
Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall
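The disaggregation idea (splitting each daily total across sub-daily intervals with fragment weights that sum to one) can be sketched as follows. Drawing the fragment weights from a Dirichlet distribution is an illustrative simplification and an assumption of this sketch; DiMoN itself resamples fragments from observed records and applies a random cascade.

```python
import numpy as np

def disaggregate_daily(daily_mm, n_parts=6, concentration=0.5, seed=0):
    """Split each daily rainfall total into n_parts sub-daily values
    (n_parts=6 gives 4-hourly). Fragment weights are sampled from a
    Dirichlet distribution as a stand-in for the method of fragments;
    a small 'concentration' yields bursty, storm-like splits.
    """
    rng = np.random.default_rng(seed)
    out = []
    for total in daily_mm:
        if total == 0:
            out.append(np.zeros(n_parts))      # dry day stays dry
        else:
            w = rng.dirichlet(np.full(n_parts, concentration))
            out.append(total * w)              # preserves the daily total
    return np.concatenate(out)

print(disaggregate_daily([12.0, 0.0, 3.5]).round(2))
```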
Procedia PDF Downloads 201
1048 Design of EV Steering Unit Using AI Based on Estimate and Control Model
Authors: Seong Jun Yoon, Jasurbek Doliev, Sang Min Oh, Rodi Hartono, Kyoojae Shin
Abstract:
Electric power steering (EPS), which is now commonly used in electric vehicles, is an electrically driven steering device. Compared to hydraulic systems, EPS offers advantages such as simple system components, easy maintenance, and improved steering performance. However, because the EPS system is nonlinear, controller design is difficult. To address this, various machine learning and artificial intelligence approaches, notably artificial neural networks (ANN), have been applied, since an ANN can effectively learn the relationships between inputs and outputs in a data-driven manner. This research explores two main areas: designing an EPS identifier using an ANN trained with the backpropagation (BP) algorithm, and enhancing the EPS controller with an ANN trained with the Levenberg-Marquardt (LM) algorithm. The proposed ANN-based BP identifier shows superior performance and accuracy compared to linear transfer-function estimators, while the LM-based controller offers better tracking of the input angle reference and faster response times than traditional PID controllers. Overall, the proposed ANN methods demonstrate significant promise for improving EPS system performance.
Keywords: ANN backpropagation modelling, electric power steering, transfer function estimator, electric vehicle driving system
Procedia PDF Downloads 44
1047 Forecast of Polyethylene Properties in the Gas Phase Polymerization Aided by Neural Network
Authors: Nasrin Bakhshizadeh, Ashkan Forootan
Abstract:
A major problem affecting quality control in industrial polymerization is the lack of suitable on-line measurement tools for evaluating polymer properties such as the melt and density indices. Conventional control of the polymerization is performed manually by taking samples, measuring the polymer quality in the lab, and recording the results; this method is highly time-consuming and leads to large amounts of off-specification product. The online application for estimating melt index and density proposed in this study is a neural network based on input-output data from a polyethylene production plant. The temperature, the level of the reactor bed, the mass flow rates of ethylene, hydrogen, and butene-1, and the molar concentrations of ethylene, hydrogen, and butene-1 are used to establish the neural model. The neural network is trained on actual operational data using the backpropagation and Levenberg-Marquardt techniques. The simulation results indicate that neural network process models with three layers (one hidden layer) for forecasting the density, and four layers for the melt index, successfully predict these quality properties.
Keywords: polyethylene, polymerization, density, melt index, neural network
Procedia PDF Downloads 144
1046 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs
Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar
Abstract:
The oil and gas industry has been reluctant to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists, and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process; it finds application whenever one needs to make an estimate, forecast, or decision under significant uncertainty. First, this project performs Monte Carlo simulation on a given dataset using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, a simulation algorithm was developed in MATLAB that prompts the user for the input distributions, the parameters associated with each distribution (mean, standard deviation, minimum, maximum, most likely, etc.), and the desired probability level at which reserves are to be calculated. The algorithm developed and tested in MATLAB was then implemented in Python, where existing libraries for statistics and graph plotting were imported to generate better output, and a simple graphical user interface was written with Qt Designer (PyQt). The resulting plots were validated against the results from the U.S. DOE MonteCarlo software.
Keywords: simulation, probability, confidence interval, sensitivity analysis
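A minimal sketch of this kind of probabilistic reserves estimate follows, using the standard volumetric equation with assumed triangular and normal input distributions; the distributions and values are illustrative, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# illustrative input distributions (not the paper's data)
area = rng.triangular(400, 600, 900, N)      # drainage area, acres
h    = rng.triangular(20, 35, 60, N)         # net pay, ft
phi  = rng.normal(0.22, 0.03, N)             # porosity
sw   = rng.normal(0.30, 0.05, N)             # water saturation
bo   = rng.normal(1.2, 0.05, N)              # formation volume factor
rf   = rng.triangular(0.15, 0.30, 0.45, N)   # recovery factor

# volumetric oil in place and recoverable reserves (STB)
ooip = 7758 * area * h * phi * (1 - sw) / bo
reserves = ooip * rf

# P90 = value exceeded with 90% probability (10th percentile)
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} STB")
```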
Procedia PDF Downloads 382
1045 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
In recent years, demand for healthcare services has increased dramatically, and with it the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demand necessitates the use of optimization techniques to improve overall service efficiency in healthcare settings; however, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department. The ProM framework is used to discover clinical pathway patterns and relationships between activities, and its sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as input for the next phase, the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicates that a carefully designed layout can significantly decrease the distances that patients must travel.
Keywords: mixed integer programming, facility layout problem, process mining, healthcare operation management
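The essence of coupling the mined Markov chain with the layout problem is to weight the distance between locations by the patient-flow frequency between activities. Below is a brute-force toy version of that objective (a quadratic-assignment-style formulation); the flow and distance matrices are invented for illustration, and the paper's actual model is a mixed integer program rather than enumeration.

```python
import itertools
import numpy as np

# transition frequencies between 4 activities (as mined from event logs)
flow = np.array([[0, 30, 5, 1],
                 [4, 0, 20, 8],
                 [2, 6, 0, 12],
                 [1, 2, 3, 0]])
# walking distances between 4 candidate locations
dist = np.array([[0, 10, 20, 30],
                 [10, 0, 10, 20],
                 [20, 10, 0, 10],
                 [30, 20, 10, 0]])

def travel(assign):
    """Total flow-weighted travel for an activity->location assignment."""
    return sum(flow[i, j] * dist[assign[i], assign[j]]
               for i in range(4) for j in range(4))

best = min(itertools.permutations(range(4)), key=travel)
print("best layout:", best, "total travel:", travel(best))
```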
Procedia PDF Downloads 339
1044 Predicting the Diagnosis of Alzheimer's Disease: Development and Validation of Machine Learning Models
Authors: Jay L. Fu
Abstract:
Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models were developed, including logistic regression, k-nearest neighbors, support vector machine, random forest, and a neural network. The data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess prediction accuracy. Key risk factors were identified, and the models were compared to determine the best prediction model. Among them, the random forest model performed best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important factors contributing to the detection of Alzheimer's. Across all the models used, at least 4 of the 5 models agreed on the diagnosis for 90.42% of the testing inputs. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately enables early treatment of these patients.
Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction
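A minimal sketch of the train/test protocol and the 4-of-5 agreement check described above, using scikit-learn; synthetic data stands in for the MRI and demographic features, and the hyperparameters are defaults rather than the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for the 373-session MRI/demographic dataset
X, y = make_classification(n_samples=373, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = [LogisticRegression(max_iter=1000), KNeighborsClassifier(),
          SVC(), RandomForestClassifier(random_state=0),
          MLPClassifier(max_iter=2000, random_state=0)]
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# fraction of test cases where at least 4 of the 5 models agree
votes = preds.sum(axis=0)           # number of models predicting class 1
agreement = np.mean((votes >= 4) | (votes <= 1))
print(f"4-of-5 agreement on {agreement:.2%} of test inputs")
```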
Procedia PDF Downloads 143
1043 CTHTC: A Convolution-Backed Transformer Architecture for Temporal Knowledge Graph Embedding with Periodicity Recognition
Authors: Xinyuan Chen, Mohd Nizam Husen, Zhongmei Zhou, Gongde Guo, Wei Gao
Abstract:
Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention for its enormous value; however, existing models lack the ability to capture local interactions and global dependencies simultaneously along with evolutionary dynamics, and the latest advances in convolutions and Transformers have not yet been employed in this area. Moreover, periodic patterns in TKGs have not been fully explored either. To this end, a multi-stage hybrid architecture with convolution-backed Transformers is introduced to TKGC tasks for the first time, combined with a Hawkes process to model evolving event sequences in a continuous-time domain. In addition, seasonal-trend decomposition is adopted to identify periodic patterns. Experiments on six public datasets are conducted to verify the model's effectiveness against state-of-the-art (SOTA) methods, and an extensive ablation study evaluates architecture variants as well as the contributions of individual components, paving the way for further exploitation. Besides complexity analysis, input sensitivity and safety challenges are also thoroughly discussed for comprehensiveness.
Keywords: temporal knowledge graph completion, convolution, transformer, Hawkes process, periodicity
Procedia PDF Downloads 78
1042 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser
Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett
Abstract:
Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence overall efficiency, in particular the two-phase flow through the membrane, gas diffusion layers (GDL), and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast, and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Several experiments are carried out, each designed so that only specific physical processes occur and the AE related solely to one process can be measured; a range of experimental conditions is therefore used to induce different flow regimes within the flow channels and GDL. The resulting AE data are first separated into events, each defined by exceeding the noise threshold: an acoustic event consists of a number of consecutive peaks and ends when the wave falls back below the noise threshold. For each acoustic event the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria, and Principal Component Analysis is performed on the resulting data, ordering the criteria by the eigenvalues of their covariance matrix. This provides an easy way of determining which criteria convey the most information about the acoustic data. The events are then placed in the two- or three-dimensional space formed by the most relevant criteria axes; by finding regions of this space occupied only by acoustic events originating from one of the experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. The AE data produced earlier are used to train a self-learning algorithm and develop an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows in-situ optimization and recognition of suboptimal states of operation.
Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser
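The feature-vector-plus-PCA step described above is straightforward to sketch with scikit-learn. The six event attributes are those listed in the abstract, but the values here are random stand-ins for real AE events.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# each row: one acoustic event with the six attributes from the text:
# [max amplitude, duration, n peaks, peaks before max,
#  avg peak intensity, time to max]  (random stand-in values)
rng = np.random.default_rng(1)
events = rng.random((200, 6))

X = MinMaxScaler().fit_transform(events)   # normalize each criterion
pca = PCA(n_components=3).fit(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
coords = pca.transform(X)   # event coordinates in the reduced space,
                            # ready for region/cluster inspection
```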
Procedia PDF Downloads 156
1041 Non-Targeted Adversarial Image Classification Attack-Region Modification Methods
Authors: Bandar Alahmadi, Lethia Jackson
Abstract:
Machine learning models are used today in many real-life applications, and their safety and security are important so that their results remain as accurate as possible. One challenge in machine learning model security is the adversarial example attack: adversarial examples are designed by an attacker to cause a machine learning model to misclassify its input. We propose a method to generate adversarial examples to attack image classifiers, modifying successfully classified images so that a classifier misclassifies them after the modification. In our method, we do not update the whole image; instead, we detect the important region, modify it, place it back into the original image, and then run the result through the classifier. The algorithm modifies the detected region using two methods: first, it overlays an abstract image matrix on the detected image matrix; then, it performs a rotation attack, rotating the detected region around its axes and embedding the trace of the image in the image background. Finally, the attacked region is placed back in the original position from which it was removed, and a smoothing filter is applied to blend the background with the foreground. We test our method on a cascade classifier, and the algorithm is efficient: the classifier confidence drops to almost zero. We also test it on a CNN (convolutional neural network) with higher settings, and the algorithm works successfully there as well.
Keywords: adversarial examples, attack, computer vision, image processing
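The region-modification step (extract a region, rotate it, place it back, smooth the result) can be sketched with SciPy as below. The fixed region coordinates and angle are illustrative assumptions, since the paper detects the important region automatically, and the smoothing is applied globally here for simplicity.

```python
import numpy as np
from scipy import ndimage

def rotate_region_attack(image, box, angle=30.0, sigma=1.0):
    """Rotate a sub-region of a gray-scale image and smooth the result.

    box = (row, col, height, width) is assumed to come from a
    separate region detector (not implemented here).
    """
    r, c, h, w = box
    region = image[r:r + h, c:c + w]
    rotated = ndimage.rotate(region, angle, reshape=False, mode="nearest")
    attacked = image.copy()
    attacked[r:r + h, c:c + w] = rotated             # place region back
    # smooth to blend background with foreground (global for simplicity)
    return ndimage.gaussian_filter(attacked, sigma)

img = np.random.random((64, 64))                     # stand-in image
adv = rotate_region_attack(img, box=(16, 16, 32, 32))
print(adv.shape)
```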
Procedia PDF Downloads 339
1040 Modeling Pan Evaporation Using Intelligent Methods of ANN, LSSVM and Tree Model M5 (Case Study: Shahroud and Mayamey Stations)
Authors: Hamidreza Ghazvinian, Khosro Ghazvinian, Touba Khodaiean
Abstract:
The importance of evaporation estimation in water resources and agricultural studies is undeniable, and pan evaporation is used as an indicator of the evaporation of lakes and reservoirs around the world because its data are easy to interpret. In this research, intelligent models were investigated for estimating daily pan evaporation. Shahroud and Mayamey, two cities located in Semnan province in Iran, were considered as the study sites; both have dry weather conditions with high evaporation potential. Eleven years of meteorological data from the synoptic stations of Shahroud and Mayamey were used. The intelligent models used in this study are the Artificial Neural Network (ANN), Least Squares Support Vector Machine (LSSVM), and M5 tree models. The meteorological parameters of minimum and maximum air temperature (Tmin, Tmax), wind speed (WS), sunshine hours (SH), air pressure (PA), and relative humidity (RH) were selected as input data, with pan evaporation (EP) as the output. 70% of the data were used for training and 30% for testing. The models were evaluated with the coefficient of determination (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The results for the Shahroud and Mayamey stations showed that all three models perform reasonably well.
Keywords: pan evaporation, intelligent methods, Shahroud, Mayamey
Procedia PDF Downloads 74
1039 Reducing Support Structures in Design for Additive Manufacturing: A Neural Networks Approach
Authors: Olivia Borgue, Massimo Panarotto, Ola Isaksson
Abstract:
This article presents a neural network-based strategy for reducing the need for support structures when designing for additive manufacturing (AM). Additive manufacturing is a relatively new and immature industrial technology, and the information needed to make confident decisions when designing for AM is limited. This lack of information especially impacts the early stages of engineering design; for instance, it is difficult to actively consider the support structures needed for manufacturing a part, a difficulty related to the challenge of designing a product geometry that accounts for customer requirements and manufacturing constraints while minimizing support structures. The approach presented in this article proposes an automated geometry modification technique for reducing the use of support structures while designing for AM. The strategy starts with a neural network-based shape recognition step that classifies the product, using an STL file of the product as input. Based on the classification, an automatic modification of the part geometry, implemented in MATLAB, is applied. At the end of the process, the strategy presents different geometry modification alternatives depending on the type of product being designed; the alternatives are then evaluated with a QFD-like decision support tool.
Keywords: additive manufacturing, engineering design, geometry modification optimization, neural networks
Procedia PDF Downloads 253
1038 Towards Human-Interpretable, Automated Learning of Feedback Control for the Mixing Layer
Authors: Hao Li, Guy Y. Cornejo Maceda, Yiqing Li, Jianguo Tan, Marek Morzynski, Bernd R. Noack
Abstract:
We propose an automated analysis of flow control behaviour from an ensemble of control laws and the associated time-resolved flow snapshots. The input may be the rich database of machine learning control (MLC) optimizing a feedback law for a cost function in the plant. The proposed methodology provides (1) insights into the control landscape, which maps control laws to performance, including its extrema and ridge lines, (2) a catalogue of representative flow states and their contributions to the cost function for the investigated control laws, and (3) a visualization of the dynamics. Key enablers are the classification and feature extraction methods of machine learning. The analysis is successfully applied to the stabilization of a mixing layer with sensor-based feedback driving an upstream actuator; the fluctuation energy is reduced by 26%. The control replaces unforced Kelvin-Helmholtz vortices and their subsequent vortex pairing with higher-frequency, lower-energy Kelvin-Helmholtz structures. These efforts target a human-interpretable, fully automated analysis of MLC that identifies qualitatively different actuation regimes, distils the corresponding coherent structures, and develops a digital twin of the plant.
Keywords: machine learning control, mixing layer, feedback control, model-free control
Procedia PDF Downloads 223
1037 Economics and Management Information Systems: Institute of Management and Technology Enugu a Case Study
Authors: Cletus Agbowo
Abstract:
Standard principles, rules, regulations, norms, and guides are necessities in practice, especially in economics and management information systems, with the Institute of Management and Technology (IMT) Enugu presented here as a case study. Without mincing words, the fundamental bottleneck of management is economics: how to select and engage meagre productive resources to achieve countless objectives without tears. Management information systems inevitably become bound up in organizational politics because they influence access to a key resource, namely information. Economics and management information can affect who does what to whom, when, where, and how in an organization. In great institutions like the Institute of Management and Technology (IMT) Enugu, many new information systems require changes in personnel and individual routines that can be painful for those involved and that require retraining and additional effort which may or may not be compensated. In a nutshell, because management information systems potentially change an organization's structure, culture, business processes, and strategy, there is often considerable resistance to them when they are introduced. The institute in this case study has many schools, departments, divisions, and units, all of which need research on economics and management information systems. A system can be defined as a set of interrelated components and/or elements which react with input to produce output; a department in an organization is a system. The researcher sets out to itemize the practical challenges encountered and the solutions adopted by the Institute of Management and Technology and the Enugu state government.
Keywords: economics, information, management, productivity, regulations
Procedia PDF Downloads 381