Search results for: hybrid extragradient method
18929 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach
Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani
Abstract:
This study focuses on the development of prediction models for the ozone concentration time series. The prediction model is built based on a chaotic approach. First, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting purposes. A traditional autoregressive linear prediction model is also built. Moreover, an improvement to the local linear approximation method is introduced. The prediction models are applied to the hourly ozone time series observed at the benchmark station in Malaysia. Comparison of all models through the calculation of mean absolute error, root mean squared error, and correlation coefficient shows that the model with the improved prediction method performs best. Thus, the chaotic approach is well suited to developing a prediction model for the ozone concentration time series.
Keywords: chaotic approach, phase space, Cao method, local linear approximation method
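As an illustration of the forecasting step, the sketch below reconstructs a phase space by delay embedding and applies a local linear (affine) map fitted to the nearest neighbours of the current state. The embedding dimension, delay, and neighbour count are assumed values chosen for the example, not parameters reported in the abstract.

```python
# Minimal local linear approximation forecast from a delay embedding (illustrative only).
import numpy as np

def local_linear_forecast(series, dim=4, tau=1, k=10):
    x = np.asarray(series, dtype=float)
    n_states = len(x) - (dim - 1) * tau
    # Delay-coordinate states: row i is [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]
    states = np.array([x[i : i + dim * tau : tau] for i in range(n_states)])
    query = states[-1]                        # current state to forecast from
    history = states[:-1]                     # states with a known next value
    targets = x[(dim - 1) * tau + 1 :]        # one-step-ahead value for each history state
    # k nearest neighbours of the current state in phase space
    idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    # Fit a local affine map on the neighbourhood and apply it to the query state
    A = np.hstack([np.ones((k, 1)), history[idx]])
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return coef[0] + query @ coef[1:]

# Example: forecast the next point of a noisy deterministic series
t = np.linspace(0, 60, 600)
demo = np.sin(t) + 0.05 * np.random.randn(t.size)
print(local_linear_forecast(demo))
```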
Procedia PDF Downloads 332
18928 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done by solving Schroedinger equations with the help of powerful tools that manage them atom by atom, often requiring high-performance computing. Here, a full portfolio of new tools is offered, combining statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of machine learning of understandable rules) with the more traditional modeling, simulation, and control theory of mixed dynamic-logic hybrid processes. The approach is intended to be general purpose, although it is exemplified on a popular set of chemical physics problems.
Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
Procedia PDF Downloads 32
18927 Tumor Detection of Cerebral MRI by Multifractal Analysis
Authors: S. Oudjemia, F. Alim, S. Seddiki
Abstract:
This paper shows the application of multifractal analysis as an additional aid in cancer diagnosis. Medical image processing is a very important discipline in which many existing methods seek solutions to real problems in medicine. In this work, we present results of multifractal analysis of brain MRI images. The purpose of this analysis was to separate healthy from cancerous brain tissue. A nonlinear method based on the multifractal detrending moving average (MFDMA), which is a generalization of detrended fluctuation analysis (DFA), is used for the detection of abnormalities in these images. The proposed method successfully separated the two types of brain tissue. It is important to note that this nonlinear method was chosen because of the complexity and irregularity of tumor tissue, which linear and classical nonlinear methods struggle to characterize completely. In order to show the performance of this method, we compared its results with those of the conventional box-counting method.
Keywords: irregularity, nonlinearity, MRI brain images, multifractal analysis, brain tumor
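For orientation, the following minimal sketch implements plain (monofractal) DFA, the detrending idea that MFDMA generalizes; it is only an illustration of the scaling analysis and not the MRI-specific MFDMA pipeline of the paper.

```python
# Minimal detrended fluctuation analysis (DFA) sketch on a 1-D signal.
import numpy as np

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrated (profile) series
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s : (i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(4096)))   # ~0.5 for white noise
```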
Procedia PDF Downloads 443
18926 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds
Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang
Abstract:
Estimating the 6D pose of objects is a core step in robot bin-picking tasks. The problem is that, in real applications, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision
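One standard way to turn three predicted points and their counterparts on the object model into a 6D pose is least-squares rigid alignment via SVD (the Kabsch solution). The sketch below illustrates that step under this assumption; it is not necessarily the exact recovery procedure used in the paper.

```python
# Rigid transform (R, t) from corresponding 3-D points by the Kabsch/SVD method.
import numpy as np

def rigid_transform(model_pts, scene_pts):
    P = np.asarray(model_pts, float)   # Nx3 points in the object frame
    Q = np.asarray(scene_pts, float)   # Nx3 predicted points in the camera frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy check with three non-collinear points
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
scene = model @ true_R.T + np.array([0.5, -0.2, 2.0])
R, t = rigid_transform(model, scene)
print(np.allclose(R, true_R), t)
```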
Procedia PDF Downloads 161
18925 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
Keywords: bottom elevation, MVS, river, SfM
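The core calibration step can be illustrated in a few lines: fit a single multiplicative correction factor between apparent (SfM-MVS) depths and measured true depths, then rescale. All numbers below are hypothetical and only demonstrate the idea.

```python
# Empirical correction factor between apparent and true water depths (made-up data).
import numpy as np

apparent = np.array([0.21, 0.35, 0.52, 0.68, 0.90])    # m, from SfM-MVS bottom elevations
true_depth = np.array([0.28, 0.47, 0.70, 0.93, 1.21])  # m, field measurements

# Correction factor k minimising ||true - k * apparent|| (no offset term)
k = np.sum(apparent * true_depth) / np.sum(apparent ** 2)
corrected = k * apparent
rmse = np.sqrt(np.mean((corrected - true_depth) ** 2))
print(f"correction factor ~ {k:.3f}, RMSE ~ {rmse:.3f} m")
```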
Procedia PDF Downloads 300
18924 Spectrophotometric Determination of Phenylephrine Hydrochloride by Coupling with Diazotized 2,4-Dinitroaniline
Authors: Sulaiman Gafar Muhamad
Abstract:
A rapid spectrophotometric method for the micro-determination of phenylephrine-HCl (PHE) has been developed. The proposed method involves the coupling of phenylephrine-HCl with diazotized 2,4-dinitroaniline in an alkaline medium, with absorbance measured at λmax = 455 nm. Under the present optimum conditions, Beer's law was obeyed in the range of 1.0-20 μg/ml of PHE, with a molar absorptivity of 1.915 × 10⁴ l·mol⁻¹·cm⁻¹, a relative error of 0.015, and a relative standard deviation of 0.024%. The current method has been applied successfully to estimate phenylephrine-HCl in pharmaceutical preparations (nose drops and syrup).
Keywords: diazo-coupling, 2,4-dinitroaniline, phenylephrine-HCl, spectrophotometry
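As a quick consistency check of the calibration, the Beer-Lambert relation A = εcl can be used to predict absorbances across the stated linear range. The molar mass of phenylephrine hydrochloride (≈ 203.7 g/mol) and the 1 cm path length are assumed values not given in the abstract.

```python
# Predicted absorbances over the stated Beer's-law range (assumed molar mass and path).
EPSILON = 1.915e4          # L mol^-1 cm^-1, from the abstract
PATH_CM = 1.0              # assumed cell path length
MW_PHE_HCL = 203.7         # g/mol, assumed molar mass of phenylephrine-HCl

for c_ug_ml in (1.0, 10.0, 20.0):
    c_mol_l = c_ug_ml * 1e-3 / MW_PHE_HCL      # ug/mL -> g/L -> mol/L
    absorbance = EPSILON * c_mol_l * PATH_CM   # Beer-Lambert: A = epsilon * c * l
    print(f"{c_ug_ml:5.1f} ug/mL -> A ~ {absorbance:.3f}")
```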
Procedia PDF Downloads 258
18923 Rational Probabilistic Method for Calculating Thermal Cracking Risk of Mass Concrete Structures
Authors: Naoyuki Sugihashi, Toshiharu Kishi
Abstract:
The probability of occurrence of thermal cracks in mass concrete in Japan is evaluated with a cracking probability diagram that represents the relationship between the thermal cracking index and the probability of crack occurrence in the actual structure. In this paper, we propose a method to directly calculate the cracking probability, following probabilistic theory, by modeling the variance of tensile stress and tensile strength. In this method, the relationships among the variances of tensile stress and tensile strength, the thermal cracking index, and the cracking probability are formulated and presented. In addition, the standard deviations of tensile stress and tensile strength were identified, and a method for calculating the cracking probability in a generally controlled construction environment was also demonstrated.
Keywords: thermal crack control, mass concrete, thermal cracking probability, durability of concrete, calculating method of cracking probability
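A generic reliability-style illustration of this idea, not the authors' exact formulation: if tensile stress and tensile strength are treated as independent normal variables with known means and standard deviations, the cracking probability is the probability that strength falls below stress. All numbers are hypothetical.

```python
# Cracking probability from means/standard deviations of stress and strength
# under independent-normal assumptions (illustrative values only).
from math import sqrt
from scipy.stats import norm

mu_stress, sd_stress = 2.0, 0.30      # MPa, thermal tensile stress
mu_strength, sd_strength = 2.6, 0.35  # MPa, tensile strength

cracking_index = mu_strength / mu_stress                   # strength / stress ratio
margin_mean = mu_strength - mu_stress
margin_sd = sqrt(sd_stress**2 + sd_strength**2)
p_crack = norm.cdf(0.0, loc=margin_mean, scale=margin_sd)  # P(strength < stress)

print(f"thermal cracking index ~ {cracking_index:.2f}, "
      f"cracking probability ~ {p_crack:.3f}")
```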
Procedia PDF Downloads 348
18922 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper
Authors: Nicolas Bae, Theodore L. Karavasilis
Abstract:
Increasingly, a large number of new and existing tall buildings are required to improve their resilient performance against strong winds and earthquakes to minimize direct as well as indirect damage to society. Disruption of the everyday functions of tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, which is one type of hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loads, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings (up to forty-storey steel frame buildings) by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified because tall buildings tend to be governed by wind load in their normal state and by earthquake load after yielding of the steel plates. The modeling of the prototype tall building is conducted using the OpenSees software. Three types of models were used to verify the performance of the damper (MRF, MRF with visco-elastic damper, and MRF with visco-plastic damper); 22 seismic records were used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous and visco-elastic dampers is markedly more effective in reducing inelastic deformations, such as roof displacement, maximum story drift, and roof velocity, than the MRF alone.
Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design
Procedia PDF Downloads 193
18921 A New Family of Integration Methods for Nonlinear Dynamic Analysis
Authors: Shuenn-Yih Chang, Chiu-LI Huang, Ngoc-Cuong Tran
Abstract:
A new family of structure-dependent integration methods, whose coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the step size for time integration, is proposed in this work. This family of methods simultaneously combines controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and it is possible to achieve zero damping. In addition, it can provide high-frequency damping to suppress or even remove the spurious oscillations of high-frequency modes, whereas the low-frequency modes can be integrated very accurately owing to the almost zero damping applied to them. It is shown herein that the proposed family of methods can have exactly the same numerical properties as the HHT-α method for linear elastic systems. In addition, it still preserves the most important property of a structure-dependent integration method, which is an explicit formulation for each time step. Consequently, it can save a huge amount of computational effort in solving inertial problems when compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family of methods is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, and this reduces to 0.16% for a 1000-DOF system. Apparently, the saving of computational effort is very significant.
Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy
Procedia PDF Downloads 641
18920 On Periodic Integer-Valued Moving Average Models
Authors: Aries Nawel, Bentarzi Mohamed
Abstract:
This paper deals with the study of some probabilistic and statistical properties of a Periodic Integer-Valued Moving Average model (PINMA_{S}(q)). Closed forms of the mean, the second moment, and the periodic autocovariance function are obtained. Furthermore, the time reversibility of the model is discussed in detail. Moreover, estimates of the underlying parameters are obtained by the Yule-Walker method, the Conditional Least Squares method (CLS), and the Weighted Conditional Least Squares method (WCLS). A simulation study is carried out to evaluate the performance of the estimation methods. Finally, an application to a real data set is provided.
Keywords: periodic integer-valued moving average, periodically correlated process, time reversibility, count data
Procedia PDF Downloads 203
18919 A Physical Treatment Method as a Prevention Method for Barium Sulfate Scaling
Authors: M. A. Salman, G. Al-Nuwaibit, M. Safar, M. Rughaibi, A. Al-Mesri
Abstract:
Barium sulfate (BaSO₄) is a hard scale that usually precipitates on the surface of equipment in many industrial systems, such as oil and gas production, desalination, and cooling and boiler operation. It is extremely resistant to both chemical and mechanical cleaning, so BaSO₄ is a problematic and expensive scale. Although barium ions are present in most natural waters at very low concentrations, as low as 0.008 mg/l, they can cause scaling problems in the presence of a high concentration of sulfate ions or when mixing with incompatible waters, as in oil-produced water. The scaling potential of BaSO₄ was calculated and compared for seawater at the intakes of seven desalination plants in Kuwait, for brine water, and for Kuwait oil-produced water, and the best location with respect to barium sulfate scaling was reported. Finally, a physical treatment method (magnetic treatment) and a chemical treatment method were used to control BaSO₄ scaling using saturated solutions at different operating temperatures, flow velocities, feed pHs, and magnetic field strengths. The results of the two methods are discussed, and the more economical one with reasonable performance, the physical treatment method, is recommended.
Keywords: magnetic field strength, flow velocity, retention time, barium sulfate
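The scaling-potential calculation can be illustrated with a simplified saturation index, SI = log₁₀(IAP/Ksp), evaluated under ideal-solution assumptions. The Ksp value (~1.1 × 10⁻¹⁰ at 25 °C) and the water compositions below are assumptions made for the example; real brines would require activity corrections (e.g., Pitzer models).

```python
# Simplified BaSO4 saturation index (ideal solution, assumed 25 degC Ksp).
import math

KSP_BASO4 = 1.1e-10             # assumed solubility product at 25 degC
MW_BA, MW_SO4 = 137.33, 96.06   # g/mol

def saturation_index(ba_mg_l, so4_mg_l):
    ba = ba_mg_l / 1000.0 / MW_BA        # mg/L -> mol/L
    so4 = so4_mg_l / 1000.0 / MW_SO4
    return math.log10(ba * so4 / KSP_BASO4)   # SI > 0 -> scaling tendency

# Hypothetical compositions: low-sulfate fresh water vs mixing with barium-rich produced water
print(saturation_index(0.008, 20))     # negative SI: little scaling tendency
print(saturation_index(50.0, 2800))    # strongly positive SI: severe scaling tendency
```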
Procedia PDF Downloads 268
18918 Sound Absorbing and Thermal Insulating Properties of Natural Fibers (Coir/Jute) Hybrid Composite Materials for Automotive Textiles
Authors: Robel Legese Meko
Abstract:
Natural fibers from end-of-life textiles have been made into new textile products, which has become a well-proven and effective way of processing them. Nowadays, the resources needed to make primary synthetic fibers are becoming scarcer as the world population rises. Hence, it is necessary to develop processes to fabricate textiles that are easily converted into composite materials. Acoustic comfort is closely related to the concept of sound absorption and includes protection against noise. This research paper presents an experimental study of sound absorption coefficients for natural fiber composite materials: natural fibers (coir/jute) in different blend proportions mixed with rigid polyurethane foam as a binder. The natural fiber composite materials were characterized both acoustically (sound absorption coefficient, SAC) and in terms of heat transfer (thermal conductivity). The acoustic absorption coefficient was determined using the impedance tube method according to the ASTM standard (ASTM E 1050). The influence of the structure of these materials on the sound-absorbing properties was analyzed. The experimental results indicate that the porous natural coir/jute composites possess excellent absorption of high-frequency sound waves, especially above 2000 Hz, and do not induce a significant change in the thermal conductivity of the composites. Thus, the sound absorption performance of natural fiber composites based on coir/jute fiber materials promotes environmentally friendly solutions.
Keywords: coir/jute fiber, sound absorption coefficients, compression molding, impedance tube, thermal insulating properties, SEM analysis
Procedia PDF Downloads 113
18917 Estimation and Validation of Free Lime Analysis of Clinker by Quantitative Phase Analysis Using X-Ray Diffraction
Authors: Suresh Palla, Kalpna Sharma, Gaurav Bhatnagar, S. K. Chaturvedi, B. N. Mohapatra
Abstract:
Determining the free lime content is especially important for judging the reactivity of the raw materials and the clinker quality. The free lime limit is not the same for all cements; it depends on several factors, especially the temperature reached during burning and the grain size distribution of the cement after grinding. Estimation of free lime by the conventional method is influenced by the presence of portlandite and misrepresents the actual free lime content in the clinker under quality-control conditions. To verify whether the product quality is within the limits of the standard specifications, a reliable, precise, and very reproducible way to quantify the relative phase abundances in Portland cement clinker and Portland cements is to use X-ray diffraction (XRD) in combination with the Rietveld method. In the present study, a methodology using XRD is proposed to validate the free lime results obtained by the conventional method. The XRD and TG/DTA results confirm the presence of portlandite in the clinker, which informs the interpretation of the free lime results obtained by the conventional method.
Keywords: free lime, quantitative phase analysis, conventional method, x-ray diffraction
Procedia PDF Downloads 138
18916 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature
Authors: Chin-Yun Chen
Abstract:
Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
Keywords: numerical quadrature, error estimation, derivative free method, interval computation
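To make the notion of a derivative-free error estimate concrete, the sketch below estimates the quadrature error from the difference between two rules of different order instead of from derivative bounds of the integrand. This is only a generic illustration of the idea, not the interval-arithmetic estimator developed in the paper.

```python
# Derivative-free error estimate: compare composite trapezoid and Simpson rules.
import numpy as np

def quad_with_error_estimate(f, a, b, n=64):
    x = np.linspace(a, b, 2 * n + 1)          # 2n panels so Simpson applies directly
    y = f(x)
    h = (b - a) / (2 * n)
    trapezoid = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)
    simpson = h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])
    # The difference between the two rules roughly bounds the lower-order rule's error.
    return simpson, abs(simpson - trapezoid)

value, err_est = quad_with_error_estimate(np.sin, 0.0, np.pi)
print(value, err_est, abs(value - 2.0))       # true integral is 2
```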
Procedia PDF Downloads 464
18915 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In PCM calculations, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the PCM calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that can correct this error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters and replaces the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method and used to evaluate the method's efficiency. Validation against experimental data shows that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution
Procedia PDF Downloads 332
18914 Experimental Investigation of Performance Anode Side of PEM Fuel Cell with Spin Method Coated with YSZ+SDC
Authors: Gürol Önal, Kevser Dinçer, Salih Yayla
Abstract:
In this study, the performance of a proton exchange membrane (PEM) fuel cell was experimentally investigated. Coating of the anode side of the PEM fuel cell was accomplished with the spin method using YSZ+SDC. A solution containing 0.1 g yttria-stabilized zirconia (YSZ), 0.1 g samarium-doped ceria (SDC), and 10 mL methanol was prepared. This solution was drawn into a micro-pipette, and the anode side of the PEM fuel cell was then coated with YSZ+SDC using the spin method. In the experimental study, current, voltage, and power performance before and after coating were recorded and compared. It was found that the efficiency of the PEM fuel cell increases after coating with YSZ+SDC.
Keywords: fuel cell, Polymer Electrolyte Membrane (PEM), membrane, spin method
Procedia PDF Downloads 562
18913 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult to extract directed relationships, such as one-sided love, from text, because such methods consider only the weight of a relationship and not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features, such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to relationship analysis between characters in other content, such as movies or TV dramas.
Keywords: data mining, Korean linguistic feature, literary fiction, relationship extraction
Procedia PDF Downloads 383
18912 Analysis of CO₂ Capture Products from Carbon Capture and Utilization Plant
Authors: Bongjae Lee, Beom Goo Hwang, Hye Mi Park
Abstract:
CO₂ capture products manufactured in a Carbon Capture and Utilization (CCU) plant that collects CO₂ directly from power plants require accurate measurement of the amount of CO₂ captured. For this purpose, two weight-loss tests were carried out, and one sample was analyzed using a carbon dioxide quantification device. First, the ignition loss analysis was performed by measuring the weight of the sample at 550°C after the first conversion and then confirming the loss when ignited at 950°C. Second, in the thermogravimetric analysis, the temperature range was divided into two sections, 40-500°C and 500-800°C, to confirm the weight reduction. The results of the ignition loss analysis and the thermogravimetric analysis were confirmed to be very similar. However, the final temperature of the ignition loss analysis was 950°C, which is 150°C higher than the 800°C of the thermogravimetric method, so the measured weight loss was 3 to 4% higher with the ignition loss analysis. In addition, the tendency of the CO₂ content to increase as the reaction time becomes longer was similarly confirmed. Third, the results of the wet titration method using the carbon dioxide quantification device were found to be significantly lower than those of the weight-loss methods. Therefore, based on the results obtained through the above three analysis methods, we will establish a method to analyze the amount of CO₂ accurately. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (No. 20152010201850).
Keywords: carbon capture and utilization, CCU, CO2, CO2 capture products, analysis method
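The ignition-loss arithmetic amounts to attributing the mass lost between the two hold temperatures to released CO₂; a minimal sketch with made-up masses:

```python
# Loss-on-ignition CO2 estimate (hypothetical sample masses).
m_550 = 9.820    # g, sample mass after holding at 550 degC (assumed value)
m_950 = 7.415    # g, sample mass after ignition at 950 degC (assumed value)

co2_mass = m_550 - m_950
co2_content_pct = 100.0 * co2_mass / m_550
print(f"captured CO2 ~ {co2_content_pct:.1f} wt% of the 550 degC residue")
```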
Procedia PDF Downloads 218
18911 The Proposal of Modification of California Pipe Method for Inclined Pipe
Authors: Wojciech Dąbrowski, Joanna Bąk, Laurent Solliec
Abstract:
Nowadays, technical and technological progress and the constant development of methods and devices applied to sanitary engineering are indispensable. Issues related to sanitary engineering involve flow measurements for water and wastewater. Precise measurement is very important and pivotal for further actions, such as monitoring. There are many methods and techniques of flow measurement in the area of sanitary engineering. Weirs and flumes are well-known and commonly used methods, but there are also alternative methods. Some of them are very simple, while others rely on advanced technology. An old method combined with modern technology can be more useful than before. This paper describes a substitute method of flow gauging (the California pipe method) and proposes a modification of this method for inclined pipes. Examining the possibility of improving and developing old methods is the direction of this investigation.
Keywords: California pipe, sewerage, flow rate measurement, water, wastewater, improve, modification, hydraulic monitoring, stream
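For reference, the classical horizontal-pipe California pipe relation is commonly quoted in the Vanleer form Q = 8.69 (1 - a/d)^1.88 d^2.48, with Q in ft³/s and a, d in feet. The coefficient and units below are taken from that standard formulation as an assumption; the paper's modification for inclined pipes is not reproduced here.

```python
# Classical California pipe (Vanleer) discharge relation for a horizontal pipe.
def california_pipe_discharge(air_gap_ft, diameter_ft):
    """Discharge in ft^3/s from the brink depth at the open end of a
    partially full, horizontal pipe discharging freely."""
    return 8.69 * (1.0 - air_gap_ft / diameter_ft) ** 1.88 * diameter_ft ** 2.48

# Example: 12-inch pipe, water surface 4 inches below the pipe crown at the outlet
print(california_pipe_discharge(air_gap_ft=4 / 12, diameter_ft=1.0))
```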
Procedia PDF Downloads 438
18910 Modeling and Tracking of Deformable Structures in Medical Images
Authors: Said Ettaieb, Kamel Hamrouni, Su Ruan
Abstract:
This paper presents a new method, based both on the Active Shape Model and on a priori knowledge about the spatio-temporal shape variation, for tracking deformable structures in medical imaging. The main idea is to exploit the a priori knowledge of shape that exists in the ASM and to introduce new knowledge about the shape variation over time. The aim is to define a new, more stable method allowing reliable detection of structures whose shape changes considerably over time. This method can also be used for three-dimensional segmentation by replacing the temporal component with the third spatial axis (z). The proposed method is applied to the functional and morphological study of the heart pump. The functional aspect was studied through temporal sequences of scintigraphic images, and morphology was studied through MRI volumes. The obtained results are encouraging and show the performance of the proposed method.
Keywords: active shape model, a priori knowledge, spatiotemporal shape variation, deformable structures, medical images
Procedia PDF Downloads 343
18909 Comparison of PbS/ZnS Quantum Dots Synthesis Methods
Authors: Mahbobeh Bozhmehrani, Afshin Farah Bakhsh
Abstract:
Nanoparticles with a PbS core of 12 nm and a shell of approximately 3 nm were synthesized at a PbS:ZnS ratio of 1.01:0.1 using mercaptopropionic acid as the stabilizing agent. PbS/ZnS nanoparticles show a dramatic increase in photoluminescence intensity, confirming the confinement of the PbS core, with the quantum yield increasing from 0.63 to 0.92 upon addition of the ZnS shell. In this case, synthesis by the microwave method yields nanoparticles with better optical characteristics than those synthesized by the colloidal method.
Keywords: PbS/ZnS, quantum dots, colloidal method, microwave
Procedia PDF Downloads 287
18908 The Impact of Training Method on Programming Learning Performance
Authors: Chechen Liao, Chin Yi Yang
Abstract:
Although several factors that affect learning to program have been identified over the years, there is still no consensus on why some students learn to program easily and quickly while others have difficulty. Researchers have seldom considered how to help students enhance their programming learning outcomes. The research was conducted at a high school in Taiwan. The participants were 330 tenth-grade students enrolled in the Basic Computer Concepts course with the same instructor. Two types of training methods, instruction-oriented and exploration-oriented, were used. The results of this research show that the instruction-oriented training method leads to better learning performance than the exploration-oriented training method.
Keywords: learning performance, programming learning, TDD, training method
Procedia PDF Downloads 428
18907 Approximate Confidence Interval for Effect Size Base on Bootstrap Resampling Method
Authors: S. Phanyaem
Abstract:
This paper presents confidence intervals for the effect size based on the bootstrap resampling method. A meta-analytic confidence interval for the effect size that is easy to compute is proposed. A Monte Carlo simulation study was conducted to compare the performance of the proposed confidence intervals with existing confidence intervals. The best confidence interval method will have a coverage probability close to 0.95. Simulation results show that our proposed confidence intervals perform well in terms of coverage probability and expected length.
Keywords: effect size, confidence interval, bootstrap method, resampling
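As a concrete example of the resampling idea, the sketch below computes a percentile-bootstrap confidence interval for a standardized mean difference (Cohen's d) on simulated data; it illustrates the general bootstrap CI construction rather than the specific meta-analytic interval proposed in the paper.

```python
# Percentile-bootstrap confidence interval for Cohen's d (illustrative only).
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def bootstrap_ci(x, y, n_boot=5000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    stats = [cohens_d(rng.choice(x, len(x), replace=True),
                      rng.choice(y, len(y), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(stats, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi

rng = np.random.default_rng(1)
group_a = rng.normal(0.5, 1.0, 40)     # simulated treatment group
group_b = rng.normal(0.0, 1.0, 40)     # simulated control group
print(cohens_d(group_a, group_b), bootstrap_ci(group_a, group_b))
```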
Procedia PDF Downloads 596
18906 A Clustering-Sequencing Approach to the Facility Layout Problem
Authors: Saeideh Salimpour, Sophie-Charlotte Viaux, Ahmed Azab, Mohammed Fazle Baki
Abstract:
The Facility Layout Problem (FLP) is key to the efficient and cost-effective operation of a system. This paper presents a hybrid heuristic- and mathematical-programming-based approach that conceptually divides the problem into clustering and sequencing subproblems. First, clusters of vertically aligned facilities are formed, which are then sequenced horizontally. The developed methodology provides promising results in comparison to its counterparts in the literature: it minimizes the distances between facilities that interact frequently with each other and aims to place the facilities with the most interactions at the centroid of the shop.
Keywords: clustering-sequencing approach, mathematical modeling, optimization, unequal facility layout problem
Procedia PDF Downloads 333
18905 A Quick Method for Seismic Vulnerability Evaluation of Offshore Structures by Static and Dynamic Nonlinear Analyses
Authors: Somayyeh Karimiyan
Abstract:
To evaluate the seismic vulnerability of vital offshore structures with the highest possible precision, Nonlinear Time History Analysis (NLTHA) is the most reliable method. However, since it is very time-consuming, a quick procedure is greatly desired. This paper presents a quick method that combines Push Over Analysis (POA) and NLTHA. The POA is performed first to recognize the more critical members, and then the NLTHA is performed to evaluate the critical members' vulnerability more precisely. The proposed method has been applied to a jacket-type structure. Results show that combining POA and NLTHA is a reliable seismic evaluation method and also that none of the earthquake characteristics alone can be a dominant factor in vulnerability evaluation.
Keywords: jacket structure, seismic evaluation, push-over and nonlinear time history analyses, critical members
Procedia PDF Downloads 281
18904 Application of Residual Correction Method on Hyperbolic Thermoelastic Response of Hollow Spherical Medium in Rapid Transient Heat Conduction
Authors: Po-Jen Su, Huann-Ming Chou
Abstract:
In this article, we use the residual correction method to deal with transient thermoelastic problems in a hollow spherical region when the continuum medium possesses spherically isotropic thermoelastic properties. Based on linear thermoelastic theory, the equations of hyperbolic heat conduction and thermoelastic motion were combined to establish a thermoelastic dynamic model that accounts for the deformation acceleration effect and the non-Fourier effect under transient thermal shock. Approximate solutions for the temperature and displacement distributions are obtained using the residual correction method, based on the maximum principle in combination with the finite difference method, making it easier and faster to obtain upper and lower approximations of the exact solutions. The proposed method is found to be an effective numerical method with satisfactory accuracy. Moreover, the results show that the effect of transient thermal shock induced by deformation acceleration is enhanced by non-Fourier heat conduction, with increased peak stress. The influence on the stress increases with the thermal relaxation time.
Keywords: maximum principle, non-Fourier heat conduction, residual correction method, thermo-elastic response
Procedia PDF Downloads 427
18903 Novel Technique for Calculating Surface Potential Gradient of Overhead Line Conductors
Authors: Sudip Sudhir Godbole
Abstract:
For transmission lines, the surface potential gradient is a critical design parameter for planning overhead lines, as it determines the level of corona loss (CL), radio interference (RI), and audible noise (AN). With the increase in transmission line voltage levels, bulk power transfer becomes possible using bundle conductor configurations; however, it is more complex to find the accurate surface stress in a bundle configuration. The majority of existing models for surface gradient calculation are based on analytical methods, which restricts their application in simulating complex surface geometry. This paper proposes a novel technique that utilizes both analytical and numerical procedures to predict the surface gradient. One 400 kV transmission line configuration has been selected as an example to compare the results of the different methods. The different strand shapes are a key variable in determining the surface gradient.
Keywords: surface gradient, Maxwell potential coefficient method, Markt and Mengele's method, successive images method, charge simulation method, finite element method
Procedia PDF Downloads 538
18902 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
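For context on how a fingerprint radio map is queried at positioning time, the sketch below shows a deliberately simple weighted k-nearest-neighbour baseline over received-signal-strength vectors with synthetic data; it does not reproduce the paper's S-DCGAN radio-map construction or t-SNE feature extraction.

```python
# Weighted kNN fingerprint matching over a synthetic radio map (baseline illustration).
import numpy as np

def wknn_locate(radio_map, positions, rss_query, k=3):
    d = np.linalg.norm(radio_map - rss_query, axis=1)      # distance in signal space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                               # closer fingerprints weigh more
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Synthetic radio map: 100 reference points, RSS from 6 access points
rng = np.random.default_rng(0)
positions = rng.uniform(0, 50, size=(100, 2))               # (x, y) in metres
radio_map = -40 - 20 * np.log10(1 + positions @ rng.uniform(0.1, 1.0, size=(2, 6)))
query_true = positions[7]
query_rss = radio_map[7] + rng.normal(0, 1.0, size=6)       # noisy online measurement
print(query_true, wknn_locate(radio_map, positions, query_rss))
```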
Procedia PDF Downloads 44
18901 A Fresh Approach to Learn Evidence-Based Practice, a Prospective Interventional Study
Authors: Ebtehal Qulisy, Geoffrey Dougherty, Kholoud Hothan, Mylene Dandavino
Abstract:
Background: For more than 200 years, journal clubs (JCs) have been used to teach the fundamentals of critical appraisal and evidence-based practice (EBP). However, JC curricula face important challenges, including poor sustainability, insufficient time to prepare for and conduct the activities, and lack of trainee skills and self-efficacy with critical appraisal. Andragogy principles and modern technology could help EBP be taught in more relevant, modern, and interactive ways. Method: We propose a fresh educational activity to teach EBP. Educational sessions are designed to encourage collaborative and experiential learning and do not require advance preparation by the participants. Each session lasts 60 minutes and is adaptable to in-person, virtual, or hybrid contexts. Sessions are structured around a worksheet and include three educational objectives: "1. Identify a Clinical Conundrum", "2. Compare and Contrast Current Guidelines", and "3. Choose a Recent Journal Article". Sessions begin with a short presentation by a facilitator of a clinical scenario highlighting a "grey zone" in pediatrics. Trainees are placed in groups of two to four (based on the number of participants) of varied training levels. The first task requires the identification of a clinical conundrum (a situation where there is no clear answer but only a reasonable solution) related to the scenario. For the second task, trainees must identify two or three clinical guidelines. The last task requires trainees to find a journal article published in the last year that reports an update regarding the scenario's topic. Participants are allowed to use their electronic devices throughout the session. Our university provides full-text access to major journals, which facilitated this exercise. Results: Participants were a convenience sample of trainees in the inpatient services at the Montréal Children's Hospital, McGill University. Sessions were conducted as part of an existing weekly academic activity and facilitated by pediatricians with experience in critical appraisal. There were 28 participants in 4 sessions held during Spring 2022. Time was allocated at the end of each session to collect participants' feedback via a self-administered online survey. Of the 22 responses, 41% (n=9) were from pediatric residents, 22.7% (n=5) from family medicine residents, 31.8% (n=7) from medical students, and 4.5% (n=1) from a nurse practitioner. Four respondents participated in more than one session. The "Satisfied" rates were 94.7% for session format, 100% for topic selection, 89.5% for time allocation, and 84.3% for worksheet structure. 60% of participants felt that including the sessions during the clinical ward rotation was "Feasible." As for self-efficacy, participants reported being "Confident" in the tasks as follows: 89.5% for the ability to identify a relevant conundrum, 94.8% for the compare-and-contrast task, and 84.2% for the identification of a published update. The perceived effectiveness for learning EBP was rated as "Agreed" by all participants. All participants would recommend this session for further teaching. Conclusion: We developed a modern approach to teaching EBP that was enjoyed by all levels of participants, who also felt it was a useful learning experience. Our approach addresses known JC challenges by being relevant to clinical care, fostering active engagement without requiring any preparation, using available technology, and being adaptable to hybrid contexts.
Keywords: medical education, journal clubs, post-graduate teaching, andragogy, experiential learning, evidence-based practice
Procedia PDF Downloads 116
18900 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
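The dimensionality reduction step can be sketched with a minimal diffusion-map embedding built from a Gaussian kernel and a row-normalised transition matrix (a formulation equivalent, up to normalisation, to the graph-Laplacian eigenvector view mentioned in the abstract). The kernel bandwidth and number of embedding coordinates below are assumed choices for illustration; the DPM part of the hybrid approach is not reproduced here.

```python
# Minimal diffusion-map embedding sketch (Gaussian kernel, leading non-trivial eigenvectors).
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    # Pairwise affinities with a Gaussian kernel
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / epsilon)
    # Markov (diffusion) transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals[order].real, eigvecs[:, order].real
    # Skip the trivial constant eigenvector; scale coordinates by eigenvalues^t
    return eigvecs[:, 1 : n_components + 1] * eigvals[1 : n_components + 1] ** t

# Toy data: noisy circle embedded in 3-D
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta), 0.05 * np.random.randn(200)])
emb = diffusion_map(X, n_components=2, epsilon=0.5)
print(emb.shape)   # (200, 2)
```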
Procedia PDF Downloads 111