Search results for: Squared Error (SE) loss function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9696

8226 Effect of Nickel Coating on Corrosion of Alloys in Molten Salts

Authors: Divya Raghunandanan, Bhavesh D. Gajbhiye, C. S. Sona, Channamallikarjun S. Mathpati

Abstract:

Molten fluoride salts are considered potential coolants for next-generation nuclear plants, where the heat can be utilized for the production of hydrogen and electricity. Among molten fluoride salts, FLiNaK (LiF-NaF-KF: 46.5-11.5-42 mol %) is a potential candidate coolant due to its superior thermophysical properties, such as high-temperature stability, boiling point, volumetric heat capacity and thermal conductivity. The major technical challenge in implementation is the selection of a structural material that can withstand the corrosive nature of FLiNaK. A corrosion study of the alloys SS 316L, Hastelloy B and Ni-201 was performed in molten FLiNaK at 650°C. Nickel was found to be more resistant to corrosive attack in the molten fluoride medium. Corrosion experiments were performed to study the effect of nickel coating on the corrosion of the alloys SS 316L and Hastelloy B. Weight loss of the alloys due to corrosion was measured and the corrosion rate was estimated. The surface morphology of the alloys was analyzed by Scanning Electron Microscopy.
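Weight-loss corrosion rates of the kind estimated above are commonly computed with the ASTM G1-style conversion formula; a minimal sketch with illustrative (not measured) values:

```python
# Hedged sketch: ASTM G1-style corrosion rate from coupon weight loss.
# The constant 87.6 converts (mg, cm^2, h, g/cm^3) inputs to mm/year.
def corrosion_rate_mm_per_year(weight_loss_mg, area_cm2, hours, density_g_cm3):
    """Corrosion rate in mm/year from immersion-test weight loss."""
    return 87.6 * weight_loss_mg / (area_cm2 * hours * density_g_cm3)

# Illustrative (assumed, not from the study) values for a nickel coupon:
rate = corrosion_rate_mm_per_year(weight_loss_mg=12.0, area_cm2=10.0,
                                  hours=100.0, density_g_cm3=8.89)
```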

Keywords: corrosion, FLiNaK, hastelloy, weight loss

Procedia PDF Downloads 437
8225 Performing Fat Activism in Australia: An Autoethnographic Exploration

Authors: Jenny Lee

Abstract:

Fat Studies is emerging as an interdisciplinary area of study, intersecting with Gender Studies, Sociology, Human Development and the Creative Arts. A focus on weight loss, and, therefore, fat hatred, has resulted in a form of discriminatory institutional practice that impacts women in the Western world. This focus is sanctioned by a large dieting industry, medical associations, the media, and at times, government initiatives. This paper will discuss the emergence of the so-called ‘Obesity Epidemic’ in Australia and the Western world and the stereotypes that thin equals healthy and fat equals unhealthy. This paper will argue that, for those with a health focus, ‘Health at every size’ is a more effective principle, which involves striving for healthy living, without a focus on weight loss. This discussion will contextualise an autoethnographic exploration of how fat acceptance and Health at Every Size can be encouraged through fat activism and fat political art. As part of this paper, a selection of the recent performance, writing and art in Australia will be presented, including Aquaporko, the fat femme synchronised swim team and VaVaBoomBah, the Melbourne fat burlesque performances.

Keywords: activism, fat, health, obesity, performance

Procedia PDF Downloads 180
8224 Utilization of Process Mapping Tool to Enhance Production Drilling in Underground Metal Mining Operations

Authors: Sidharth Talan, Sanjay Kumar Sharma, Eoin Joseph Wallace, Nikita Agrawal

Abstract:

Underground mining is at the core of the rapidly evolving metals and minerals sector due to increasing mineral consumption globally. Even though surface mines are still more abundant on earth, the scales of the industry are slowly tipping towards underground mining due to the rising depth and complexity of orebodies. Thus, the efficient and productive functioning of underground operations depends significantly on the synchronized performance of key elements such as the operating site, mining equipment, manpower and mine services. Production drilling is the process of conducting long-hole drilling for the purpose of charging and blasting these holes for the production of ore in underground metal mines; it is thus a crucial segment in the underground metal mining value chain. This paper presents a process mapping tool to evaluate the production drilling process in underground metal mining operations by dividing the given process into three segments, namely Input, Process and Output, which are further segregated into factors and sub-factors. As per the study, the major input factors crucial for the efficient functioning of the production drilling process are power, drilling water, geotechnical support of the drilling site, skilled drilling operators, a services installation crew, oils and drill accessories for the drilling machine, survey markings at the drill site, proper housekeeping, regular maintenance of the drill machine, suitable transportation for reaching the drilling site and, finally, proper ventilation. The major outputs of the production drilling process are ore, waste as a result of dilution, timely reporting and investigation of unsafe practices, optimized process time and, finally, well-fragmented blasted material within the specifications set by the mining company. 
The paper also exhibits the drilling loss matrix, which is utilized to appraise the loss in planned production meters per day in a mine on account of: availability loss due to machine breakdowns; utilization loss due to underutilization of the machine; and productivity loss, measured in drilling meters per percussion hour relative to the machine's planned productivity for the day. These three losses are essential for detecting bottlenecks in the process map of the production drilling operation, so that an action plan can be instigated to suppress or prevent the causes of the operational performance deficiency. The tool is beneficial to mine management in focusing on the critical factors negatively impacting the production drilling operation and in designing the operational and maintenance strategies necessary to mitigate them. 
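The three losses of the drilling loss matrix can be sketched as follows; the function signature and the example figures are illustrative assumptions, not data from the study:

```python
# Hedged sketch of the drilling loss matrix: availability, utilization and
# productivity losses, all expressed in planned drilling meters per day.
def drilling_losses(planned_m, planned_hours, breakdown_hours,
                    idle_hours, actual_m):
    available_hours = planned_hours - breakdown_hours   # after breakdowns
    percussion_hours = available_hours - idle_hours     # actually drilling
    planned_rate = planned_m / planned_hours            # planned m per hour
    availability_loss_m = breakdown_hours * planned_rate
    utilization_loss_m = idle_hours * planned_rate
    productivity_loss_m = percussion_hours * planned_rate - actual_m
    return availability_loss_m, utilization_loss_m, productivity_loss_m

# Illustrative day: 120 m planned over 10 h, 2 h breakdown, 1 h idle, 70 m drilled.
a_loss, u_loss, p_loss = drilling_losses(120.0, 10.0, 2.0, 1.0, 70.0)
```

By construction the three losses account for the full shortfall between planned and actual meters.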

Keywords: process map, drilling loss matrix, SIPOC, productivity, percussion rate

Procedia PDF Downloads 211
8223 Haplotypes of the Human Leukocyte Antigen-G in Different HIV-1 Groups from the Netherlands

Authors: A. Alyami, S. Christmas, K. Neeltje, G. Pollakis, B. Paxton, Z. Al-Bayati

Abstract:

The Human leukocyte antigen-G (HLA-G) molecule plays an important role in immunomodulation. To date, 16 untranslated region (UTR) HLA-G haplotypes have been defined by sequencing SNPs in the coding region. Of these, UTR-1, UTR-2, UTR-3, UTR-4, UTR-5, UTR-6 and UTR-7 are the most frequent 3’UTR haplotypes at the global level. UTR-1 is associated with higher levels of soluble HLA-G and HLA-G expression, whereas UTR-5 and UTR-7 are linked with low levels of soluble HLA-G and HLA-G expression. Human immunodeficiency virus type 1 (HIV-1) infection results in the progressive loss of immune function in infected individuals. The virus escape mechanism typically evades T lymphocyte and NK cell recognition and lysis through down-regulation of the classical HLA-A and B molecules, which has been associated with up-regulation of the non-classical HLA-G molecule. We evaluated the frequencies of the HLA-G 3′ untranslated region haplotypes observed in three HIV-1 groups from the Netherlands and their susceptibility to developing infection. The three groups are made up of men who have sex with men (MSM), injection drug users (IDU) and a high-risk-seronegative (HRSN) group. DNA samples were amplified with published primers prior to sequencing. According to our results, low expresser haplotype frequencies were higher in the HRSN group than in the other groups, indicating that 3’UTR polymorphisms may serve as potential prognostic biomarkers of susceptibility to HIV.

Keywords: human leukocyte antigen-G (HLA-G), men who have sex with men (MSM), injection drug users (IDU), high-risk-seronegative (HRSN) group, untranslated region (UTR)

Procedia PDF Downloads 152
8222 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (Fast Fourier Transform (FFT), Inverse Fast Fourier Transform (iFFT) and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing has a wide implementation on general-purpose processors, but general-purpose processors are not efficient for signal processing. Our interest was therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of the single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system computes acoustic spectra up to 20 times faster than the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged cells. We have thus achieved good experimental results in terms of real-time performance and energy efficiency.
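The FFT/iFFT stage of the signal chain can be illustrated in software; a naive DFT pair stands in here for the FPGA core (an illustration of the transform itself, not the authors' FPGA implementation):

```python
# Hedged sketch: forward and inverse discrete Fourier transform, the
# operation the FPGA accelerates. A naive O(n^2) DFT for clarity only.
import cmath

def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]
```

A round trip (dft followed by idft) recovers the original signal to within floating-point error, which is how such a pipeline is typically sanity-checked.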

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 73
8221 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work is aimed at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the headquarters of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement includes Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research work. The Received Signal Strength and Bit Error Rate were measured with a spectrum monitoring van with the help of a ray tracer at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were done with the help of the Global Positioning System (GPS). The other equipment used were transmitting equipment measurement software (TEMS software), laptops and log files, which showed received signal strength versus distance from the base station. An outage of about 11% was obtained from the real-time experiment, which showed that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work did not only include experiments done through empirical measurement but also enhanced mathematical models that were developed and implemented as a reference model for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, and some other assumptions. These developed (proposed) enhanced models were validated using a MATLAB (version 7.6.3.35) program and compared with the conventional antenna for accuracy. 
These outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
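The principle behind steering an adaptive array can be sketched with a uniform linear array's weights and response; this is a generic textbook illustration under an assumed half-wavelength element spacing, not the algorithm used in the study:

```python
# Hedged sketch: steering weights and array factor of a uniform linear
# array. Steering toward the desired user raises gain in that direction.
import cmath
import math

def steering_weights(n, angle_deg, spacing_wavelengths=0.5):
    """Conjugate-phase weights that steer an n-element array to angle_deg."""
    theta = math.radians(angle_deg)
    phase = 2 * math.pi * spacing_wavelengths * math.sin(theta)
    return [cmath.exp(-1j * k * phase) / n for k in range(n)]

def array_factor(weights, angle_deg, spacing_wavelengths=0.5):
    """Magnitude of the array response toward angle_deg."""
    theta = math.radians(angle_deg)
    phase = 2 * math.pi * spacing_wavelengths * math.sin(theta)
    return abs(sum(w * cmath.exp(1j * n * phase)
                   for n, w in enumerate(weights)))
```

With weights steered to 30 degrees, the response is unity at 30 degrees and much smaller off-axis, which is the mechanism behind the Bit Error Rate reduction described above.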

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 31
8220 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven methods can help to improve policies and to plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate the similarity and coupling of demands. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF) and logistic regression (LGR). The significance tests used are the Chi-squared test and Student's t-test, applied to data over a 39-year span for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume, as the NULL hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the above target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the prediction statistically meaningful as well, and prove that a target service can be matched reliably to other services, while ML shows that these indicated relationships can also be linear ones. Zero padding was used for missing years' records and better illustrated such relationships, both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years. 
The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the statistical tests (Chi-squared and Student's t). Co-plots and comparison tables for RF, CART and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and the behavior at different learning ratios. The impact of k-NN, cross-correlation and C-Means first groupings is also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC = 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse and emergency admissions. The work found that different HSc services can be well packed into plans of limited years, across various services sectors and learning configurations, as confirmed using statistical hypotheses.
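The Chi-squared linkage test between two demand series, binned into low/high classes, can be sketched as follows (the 2x2 contingency counts are illustrative, not the study's data):

```python
# Hedged sketch: Pearson Chi-squared statistic for a 2x2 contingency table
# of binned demands (no continuity correction). Large values suggest the
# two services' demand classes are associated.
def chi_squared_2x2(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Illustrative counts: rows = service A demand low/high, cols = service B.
stat = chi_squared_2x2([[10, 20], [20, 10]])
```

The statistic would then be compared against the chi-squared distribution with 1 degree of freedom to get the p-value used for the packing decision.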

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 224
8219 Assessing Economic Losses of the 2014 Flood Disaster: A Case Study of Dabong, Kelantan, Malaysia

Authors: Ahmad Hamidi Mohamed, Jamaluddin Othman, Mashitah Suid, Mohd Zaim Mohd Shukri

Abstract:

Floods are considered an annual natural disaster in Kelantan. However, the record-setting flood of 2014 was a 'tsunami-like disaster'. A study has been conducted with the objective of assessing the economic impact of the flood on the residents of the Dabong area in Kelantan Darul Naim, Malaysia. This area was selected due to the severity of the flood there. The impacts of the flood on local people were assessed by conducting structured interviews with the use of questionnaires. The questionnaire was intended to acquire information on the losses faced by Dabong residents. Questionnaires covered various areas of inconvenience suffered with respect to health effects, including illnesses suffered, their intensities, duration and associated costs. Loss of productivity and quality of life was also assessed. Inquiries were made to government agencies to obtain relevant statistical data regarding the losses due to the flood tragedy; the data were collected through formal requests to the governmental agencies and formal meetings. From the study, a staggering amount of losses was calculated. This figure comprises losses in property, farming/agriculture, trade/business, health, insurance and governmental losses. The flood brought hardship to the people of Dabong, and the loss of homes caused inconvenience to the society. The huge economic loss quantified in this study shows that the federal government and the state government of Kelantan need to determine the cause of the major flood of 2014. Fast and effective measures have to be planned and implemented in flood-prone areas to prevent the same tragedy from happening in the future.

Keywords: economic impact, flood tragedy, Malaysia, property losses

Procedia PDF Downloads 262
8218 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are an image segmentation technique whose goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour method based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function in a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
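A minimal 1-D sketch of the Perona-Malik diffusion step referenced above (parameter values are illustrative; the paper applies the 2-D scheme to images):

```python
# Hedged sketch: one explicit 1-D Perona-Malik step with conduction
# g(du) = 1 / (1 + (du/kappa)^2), which smooths small gradients while
# preserving large ones (edges).
def perona_malik_step(u, kappa=10.0, dt=0.2):
    out = list(u)
    for i in range(1, len(u) - 1):
        de = u[i + 1] - u[i]                    # forward difference
        dw = u[i - 1] - u[i]                    # backward difference
        ge = 1.0 / (1.0 + (de / kappa) ** 2)    # conduction, forward
        gw = 1.0 / (1.0 + (dw / kappa) ** 2)    # conduction, backward
        out[i] = u[i] + dt * (ge * de + gw * dw)
    return out
```

Applied repeatedly with a small kappa, the scheme flattens background variation while leaving strong edges nearly untouched, which is the enhancement effect the segmentation relies on.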

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 157
8217 Techniques to Characterize Subpopulations among Hearing Impaired Patients and Its Impact for Hearing Aid Fitting

Authors: Vijaya K. Narne, Gerard Loquet, Tobias Piechowiak, Dorte Hammershoi, Jesper H. Schmidt

Abstract:

BEAR, which stands for better hearing rehabilitation, is a large-scale project in Denmark designed and executed by three national universities, three hospitals and the hearing aid industry, with the aim of improving hearing aid fitting. A total of 1963 hearing-impaired people were included and segmented into subgroups based on hearing loss, demographics, audiological data and questionnaires (i.e., the Speech, Spatial and Qualities of Hearing scale [SSQ-12] and the International Outcome Inventory for Hearing Aids [IOI-HA]). With the aim of providing a better hearing-aid fit to individual patients, we applied modern machine learning techniques alongside traditional audiogram rule-based systems. Results show that age, speech discrimination scores and audiogram configurations emerged as important parameters in characterizing sub-populations in the data set. The attempt to characterize sub-populations reveals a clearer picture of the individual hearing difficulties encountered and the benefits derived from more individualized hearing aids.

Keywords: hearing loss, audiological data, machine learning, hearing aids

Procedia PDF Downloads 149
8216 Theoretical Investigation of the Singlet and Triplet Electronic States of ⁹⁰ZrS Molecules

Authors: Makhlouf Sandy, Adem Ziad, Taher Fadia, Magnier Sylvie

Abstract:

The electronic structure of ⁹⁰ZrS has been investigated using ab initio methods based on Complete Active Space Self-Consistent Field and Multi-Reference Configuration Interaction (CASSCF/MRCI) calculations. The number of predicted states has been extended to the 14 singlet and 12 triplet lowest-lying states situated below 36,000 cm⁻¹. The equilibrium energies of these 26 lowest-lying electronic states have been calculated in the 2S+1Λ(±) representation. The potential energy curves have been plotted as a function of the internuclear distance over a range of 1.5 to 4.5 Å. Spectroscopic constants, permanent electric dipole moments and transition dipole moments between the different electronic states have also been determined. A discrepancy of at most 5% for the majority of values shows good agreement with the available experimental data. The ground state is found to be of symmetry X¹Σ⁺ with an equilibrium internuclear distance Re = 2.16 Å. However, the (1)³Δ state is the closest to X¹Σ⁺ and is situated at 514 cm⁻¹. To the best of our knowledge, this is the first time that the spin-orbit coupling has been investigated for all the predicted states of ZrS. 52 electronic components in the Ω(±) representation have been predicted. The energies of these components, the spectroscopic constants ωe, ωeχe, βe and the equilibrium internuclear distances have also been obtained. The percentage composition of the Ω-state wave functions in terms of S-Λ states was calculated to identify their corresponding main parents. These spin-orbit coupling (SOC) calculations determined the shift between the (1)³Δ₁ and X¹Σ⁺ states and confirmed that the ground state is of ¹Σ⁺ type.

Keywords: CASSCF/MRCI, electronic structure, spin-orbit effect, zirconium monosulfide

Procedia PDF Downloads 163
8215 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model

Authors: Dojoon Jung, Kiwoong Moon, Joong Lee

Abstract:

The height of a criminal is one of the important clues that can determine the scope of a suspect search or exclude a suspect from the search target. Although measuring the height of criminals from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height of the criminal can be measured. However, it is still difficult to measure height from CCTV footage with non-contact measurement methods because of variables such as the position, posture and head shape of criminals. In this paper, we propose a method of matching the CCTV footage with the 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured using the 3D human model in various scenes of the person in the CCTV footage, and the measurement value for the target person is corrected by the measurement error obtained from replayed CCTV footage of a reference person. We tested on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurement values with 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and that this method is effective for measuring a person's height in CCTV footage.
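The reference-person correction described above reduces to a simple offset; a minimal sketch with illustrative heights (the numbers are assumptions, not the study's measurements):

```python
# Hedged sketch: correct the target's 3D-model height measurement by the
# error observed when a reference person of known height is replayed in
# the same CCTV setup.
def corrected_height(measured_target_cm, measured_ref_cm, true_ref_cm):
    return measured_target_cm + (true_ref_cm - measured_ref_cm)

# Illustrative values: the rig under-measures the reference by 0.5 cm,
# so the same offset is added back to the target's measurement.
height = corrected_height(175.2, measured_ref_cm=169.5, true_ref_cm=170.0)
```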

Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model

Procedia PDF Downloads 245
8214 Small Text Extraction from Documents and Chart Images

Authors: Rominkumar Busa, Shahira K. C., Lijiya A.

Abstract:

Text recognition is an important area in computer vision which deals with detecting and recognising text in an image. Optical Character Recognition (OCR) is a saturated area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text in chart images, the recognition rate is less than 30%. This work aims to extract small text in images using a deep learning model, a CRNN with CTC loss. The text recognition accuracy is found to improve by applying image enhancement through super-resolution prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% with the proposed method, which involves super-resolution and character segmentation followed by a CRNN with CTC loss. The efficiency of the proposed method shows that further pre-processing of chart-image text and other small-text images will improve the accuracy further, thereby helping text extraction from chart images.

Keywords: small text extraction, OCR, scene text recognition, CRNN

Procedia PDF Downloads 120
8213 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian Process Regression (GPR) and Bayesian Ridge Regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the Whale Optimization Algorithm (WOA). We evaluated the models on a dataset of solubility measurements in various feedstocks and compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
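The RMSE and R-squared metrics used to compare the models have standard definitions, sketched here in pure Python (an illustration of the metrics, not the paper's evaluation code):

```python
# Hedged sketch: the two regression metrics reported above.
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```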

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 64
8212 Semantic Textual Similarity on Contracts: Exploring Multiple Negative Ranking Losses for Sentence Transformers

Authors: Yogendra Sisodia

Abstract:

Researchers are becoming more interested in extracting useful information from legal documents thanks to the development of large-scale language models in natural language processing (NLP), and deep learning has accelerated the creation of powerful text mining models. Legal fields like contracts benefit greatly from semantic text search since it makes it quick and easy to find related clauses. After collecting sentence embeddings, it is relatively simple to locate sentences with a comparable meaning throughout the entire legal corpus. The author of this research investigated two pre-trained language models for this task: MiniLM and Roberta, and further fine-tuned them on Legal Contracts. The author used Multiple Negative Ranking Loss for the creation of sentence transformers. The fine-tuned language models and sentence transformers showed promising results.
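The Multiple Negatives Ranking Loss used for fine-tuning can be written down directly; a minimal sketch over a precomputed similarity matrix (the standard in-batch formulation with illustrative values, not the authors' training script):

```python
# Hedged sketch: Multiple Negatives Ranking Loss over a batch of
# (anchor, positive) pairs. For row i the positive is column i, and every
# other column in the batch acts as an in-batch negative; the loss is the
# mean softmax cross-entropy of picking the true positive.
import math

def mnr_loss(sim_matrix):
    n = len(sim_matrix)
    total = 0.0
    for i in range(n):
        log_z = math.log(sum(math.exp(s) for s in sim_matrix[i]))
        total += log_z - sim_matrix[i][i]
    return total / n
```

When matched clause pairs score far higher than mismatched ones, the loss approaches zero, which is what drives the embedding space toward semantic clause similarity.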

Keywords: legal contracts, multiple negative ranking loss, natural language inference, sentence transformers, semantic textual similarity

Procedia PDF Downloads 98
8211 Smart Oxygen Deprivation Mask: An Improved Design with Biometric Feedback

Authors: Kevin V. Bui, Richard A. Claytor, Elizabeth M. Priolo, Weihui Li

Abstract:

Oxygen deprivation masks operate through the use of restricting valves as a means to reduce respiratory flow, where flow is inversely proportional to the resistance applied. This produces the same effect as higher altitudes, where lower pressure leads to reduced respiratory flow. Both the increased resistance of restricting valves and the reduced pressure of higher altitudes make breathing more difficult and force the breathing muscles (diaphragm and intercostal muscles) to work harder. The process exercises these muscles, improves their strength and results in overall better breathing efficiency. Currently, these oxygen deprivation masks are purely mechanical devices without any electronic sensor to monitor the breathing condition, and thus are unable to provide feedback on breathing effort or to evaluate lung function. That is part of the reason these masks are mainly used by high-level athletes to mimic training at higher altitude, and are not suitable for patients or general customers. Our design aims to improve on the current oxygen deprivation mask to serve a larger scope of patients and customers while providing the quantitative biometric data the current design lacks. This will be accomplished by integrating sensors into the mask's breathing valves, along with data acquisition and Bluetooth modules for signal processing and transmission. Early stages of the sensor mask will measure breathing rate as a function of the changing air pressure in the mask, with later iterations providing feedback on flow rate. Data regarding breathing rate will be prudent in determining whether training or therapy is improving breathing function, and in quantifying this improvement.
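A crude sketch of how breathing rate could be derived from the in-mask pressure signal; threshold-crossing counting is an assumed placeholder for real respiratory peak detection, and all values are illustrative:

```python
# Hedged sketch: estimate breaths per minute from sampled mask pressure
# by counting upward crossings of a threshold.
def breaths_per_minute(pressure, fs_hz, threshold):
    crossings = sum(1 for a, b in zip(pressure, pressure[1:])
                    if a < threshold <= b)
    duration_min = (len(pressure) / fs_hz) / 60.0
    return crossings / duration_min
```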

Keywords: oxygen deprivation mask, lung function, spirometer, Bluetooth

Procedia PDF Downloads 216
8210 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of the conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach and a novel constraint handling strategy is presented in this study. Since the conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find near Pareto-optimal solutions with higher accuracy when compared to the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is eliminated in this algorithm.
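One common form of a feasibility-based ranking rule (an assumed illustration of the general idea; the paper's exact rule may differ) can be expressed as a sort key, which avoids penalty parameters entirely:

```python
# Hedged sketch: feasibility-based ranking. Any feasible solution ranks
# ahead of every infeasible one; infeasible solutions are ordered by total
# constraint violation, and feasible ones by objective value.
def feasibility_rank_key(objective, constraint_values):
    # Convention assumed here: g(x) <= 0 means the constraint is satisfied.
    violation = sum(max(0.0, g) for g in constraint_values)
    return (violation > 0, violation, objective)

# Illustrative population of (objective-to-minimize, constraints):
population = [
    (5.0, [0.0, -1.0]),   # feasible, worse objective
    (1.0, [0.5]),         # infeasible, better objective
]
best = min(population, key=lambda s: feasibility_rank_key(*s))
```

Because feasibility dominates the comparison, no penalty weight has to be tuned, which is the practical advantage claimed over penalty-based constraint handling.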

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 553
8209 Chemical Synthesis, Electrical and Antibacterial Properties of Polyaniline/Gold Nanocomposites

Authors: L. N. Shubha, M. Kalpana, P. Madhusudana Rao

Abstract:

Polyaniline/gold (PANI/Au) nanocomposite was prepared by an in-situ chemical oxidation polymerization method. The synthesis involved the formation of the polyaniline-gold nanocomposite by an in-situ redox reaction and the dispersion of gold nanoparticles throughout the polyaniline matrix. The nanocomposites were characterized by XRD, FTIR, TEM and UV-visible spectroscopy. The characteristic peaks in the FTIR and UV-visible spectra confirmed the expected structure of the polymer as reported in the literature. Further, transmission electron microscopy (TEM) confirmed the formation of gold nanoparticles. A crystallite size of 30 nm for nano-Au was supported by the XRD pattern. Further, the AC conductivity, dielectric constant (ε′(ω)) and dielectric loss (ε″(ω)) of the PANI/Au nanocomposite were measured using an impedance analyzer, and the effect of doping on the conductivity was investigated. The antibacterial activity of the nanocomposite was examined, and it was observed that the PANI/Au nanocomposite could be used as an antibacterial agent.

Keywords: AC conductivity, antimicrobial activity, dielectric constant, dielectric loss, polyaniline/gold (PANI/Au) nanocomposite

Procedia PDF Downloads 376
8208 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six Sigma is a metric that quantifies the performance of processes as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance and provide evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer against the minimum requirements of the specification. Methodology: Twenty-one analytes were chosen for this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle reports of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC) data. Sigma was calculated as: Sigma = (Total Error - Bias) / CV. Analytical performance was rated on the sigma value: sigma > 6 is world class, 5-6 is excellent, 4-5 is good, 3-4 is satisfactory and sigma < 3 is poor performance. Results: Based on the calculation, 76% (16 of 21 analytes) are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% are excellent (calcium, protein and urea), and 10% (chloride and sodium) require more frequent IQC runs per day.
Conclusion: Based on this study, IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
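The sigma calculation described above is straightforward to reproduce. The allowable-error, bias and CV figures below are hypothetical examples (all in percent), not the study's RCPA/IQC data.

```python
def sigma_metric(total_error, bias, cv):
    """Sigma = (TEa - |bias|) / CV, the formula used in the abstract."""
    return (total_error - abs(bias)) / cv

def rating(sigma):
    # Rating bands used in the abstract.
    if sigma >= 6: return "world class"
    if sigma >= 5: return "excellent"
    if sigma >= 4: return "good"
    if sigma >= 3: return "satisfactory"
    return "poor"

# e.g. a glucose assay with an assumed CLIA TEa of 10%, bias 1.2%, CV 1.3%
s = sigma_metric(10.0, 1.2, 1.3)
print(round(s, 2), rating(s))  # 6.77 world class
```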

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 167
8207 Estimation of Soil Erosion Potential in Herat Province, Afghanistan

Authors: M. E. Razipoor, T. Masunaga, K. Sato, M. S. Saboory

Abstract:

Estimation of soil erosion is economically and environmentally important in Herat, Afghanistan. Soil degradation has negative impacts on the lives of Herat residents: decreased soil fertility, destroyed soil structure, and consequently soil sealing and crusting. Water and wind are the main erosive agents in Herat, and scarce vegetation cover, exacerbated by socioeconomic constraints, together with steep slopes, accelerates erosion. To sustain soil productivity and reduce the impact of erosion on human life, the magnitude and extent of soil erosion must be quantified in a spatial domain. Thus, this study estimates the soil loss potential and its spatial distribution in Herat, Afghanistan by applying the Revised Universal Soil Loss Equation (RUSLE) in a GIS environment. The rainfall erosivity factor (R) ranged between 125 and 612 MJ mm ha-1 h-1 year-1. The soil erodibility factor (K) varied from 0.036 to 0.073 Mg h MJ-1 mm-1. Slope length and steepness factor (LS) values were between 0.03 and 31.4. The vegetation cover factor (C), derived from NDVI analysis of Landsat-8 OLI scenes, ranged from 0.03 to 1. The support practice factor (P) was assigned a value of 1, since there are no significant mitigation practices in the study area. The soil erosion potential map is the product of these factors. The mean soil erosion rate of Herat Province was 29 Mg ha-1 year-1, ranging from 0.024 Mg ha-1 year-1 in flat areas with dense vegetation cover to 778 Mg ha-1 year-1 on steep slopes with high rainfall but the least vegetation cover. Based on the land cover map of Afghanistan, areas with a soil loss rate higher than the soil loss tolerance (8 Mg ha-1 year-1) occupy 98% of forests, 81% of rangelands, 64% of barren lands, 60% of rainfed lands, 28% of urban areas and 18% of irrigated lands.
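The RUSLE overlay reduces to a cell-by-cell product of the factor rasters, A = R × K × LS × C × P. The sketch below uses a toy 2×2 raster with invented factor values chosen within the ranges reported above, not the Herat data.

```python
import numpy as np

# Toy factor rasters (values invented within the abstract's reported ranges).
R  = np.array([[125.0, 300.0], [450.0, 612.0]])   # rainfall erosivity
K  = np.array([[0.036, 0.050], [0.060, 0.073]])   # soil erodibility
LS = np.array([[0.03,  2.0  ], [10.0,  31.4 ]])   # slope length-steepness
C  = np.array([[0.03,  0.2  ], [0.5,   1.0  ]])   # cover management (from NDVI)
P  = 1.0                                          # support practice (none in area)

A = R * K * LS * C * P      # soil loss potential, Mg ha^-1 year^-1, per cell
tolerance = 8.0             # soil loss tolerance from the abstract
print(A)
print(A > tolerance)        # cells exceeding the tolerance
```

Flat, well-vegetated cells (top-left) fall far below the tolerance, while steep, bare cells (bottom-right) exceed it by orders of magnitude, mirroring the 0.024 to 778 Mg ha-1 year-1 spread reported above.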

Keywords: Afghanistan, erosion, GIS, Herat, RUSLE

Procedia PDF Downloads 426
8206 Effects of Inlet Filtration Pressure Loss on Single and Two-Spool Gas Turbine

Authors: Enyia James Diwa, Dodeye Ina Igbong, Archibong Archibong Eso

Abstract:

Gas turbine operators have faced dramatic financial setbacks resulting from compressor fouling. In a highly deregulated power industry with stiff market competition, it is imperative to reduce maintenance cost in order to yield maximum profit. Compressor fouling results from the deposition of contaminants, in the presence of oil and moisture, on the compressor blade or annulus surfaces, which leads to a loss of flow capacity and compressor efficiency. These combined effects reduce power output, increase heat rate and cause creep life reduction. This paper models two gas turbine engines using Cranfield University's TURBOMATCH software, a performance simulation tool used here to assess the engine fouling rate. The model engines are of different configurations and capacities and are operated in two modes, constant output power and constant turbine inlet temperature (TET), for two- and three-stage filter systems. The idea is to investigate which filtration system is more economically viable for gas turbine users based on performance alone. The results demonstrate that the two-spool engine benefits slightly more than the single-spool engine, owing to the higher pressure ratio of the two-spool configuration as well as the deceleration of the high-pressure compressor and high-pressure turbine speed at constant TET. Meanwhile, the inlet filtration system was properly designed and balanced with a well-timed and economical compressor washing regime to control compressor fouling. Different inlet air filtration and compressor washing technologies are considered, and an attempt is made to optimize the cost of a combination of both control measures.

Keywords: inlet filtration, pressure loss, single spool, two spool

Procedia PDF Downloads 319
8205 Modeling of Strong Motion Generation Areas of the 2011 Tohoku, Japan Earthquake Using Modified Semi-Empirical Technique Incorporating Frequency Dependent Radiation Pattern Model

Authors: Sandeep, A. Joshi, Kamal, Piu Dhibar, Parveen Kumar

Abstract:

In the present work, strong ground motion has been simulated using a modified semi-empirical technique (MSET) with a frequency-dependent radiation pattern model. Joshi et al. (2014) modified the semi-empirical technique to incorporate the modeling of strong motion generation areas (SMGAs). A frequency-dependent radiation pattern model is applied to simulate high-frequency ground motion more precisely. The SMGAs of the 2011 Tohoku earthquake (Mw 9.0) identified by Kurahashi and Irikura (2012) were modeled using this modified technique. Records were simulated for both frequency-dependent and constant radiation pattern functions, and the simulated records for both cases are compared with observed records in terms of peak ground acceleration and pseudo-acceleration response spectra at different stations. Comparison of simulated and observed records in terms of root mean square error suggests that the method is capable of simulating records that match over a wide frequency range for this earthquake and bear a realistic appearance in terms of shape and strong motion parameters. The results confirm the efficacy and suitability of the rupture model, defined by five SMGAs, for the developed modified technique.

Keywords: strong ground motion, semi-empirical, strong motion generation area, frequency dependent radiation pattern, 2011 Tohoku Earthquake

Procedia PDF Downloads 529
8204 Assessment of Post-surgical Donor-Site Morbidity in Vastus lateralis Free Flap for Head and Neck Reconstructive Surgery: An Observational Study

Authors: Ishith Seth, Lyndel Hewitt, Takako Yabe, James Wykes, Jonathan Clark, Bruce Ashford

Abstract:

Background: The vastus lateralis (VL) can be used to reconstruct defects of the head and neck. Whilst the advantages are documented, donor-site morbidity is not well described. This study aimed to assess donor-site morbidity after VL flap harvest; the results will inform future directions for preventative and post-operative care to improve patient health outcomes. Methods: Ten participants (mean age 55 years) were assessed for the presence of donor-site morbidity after VL harvest. Musculoskeletal outcomes (pain, muscle strength, muscle length, tactile sensation), quality of life (SF-12), and lower limb function (lower extremity function, gait function and speed, sit-to-stand) were assessed using validated and standardized procedures. Outcomes were compared to age-matched healthy reference values or the non-operated side. Analyses were conducted using descriptive statistics and non-parametric tests. Results: There was no difference in knee extensor muscle strength, muscle length, ability to sit-to-stand, or gait function (all P > 0.05). Knee flexor muscle strength was significantly lower on the operated leg than on the non-operated leg (P = 0.02), and walking speed was slower than age-matched healthy values (P < 0.001). Thigh tactile sensation was impaired in 89% of participants. Quality of life was significantly lower for the physical health component of the SF-12 (P < 0.001); the mental health component was similar to healthy controls (P = 0.26). Conclusion: VL harvest did not affect knee extensor strength, pain, walking function, ability to sit-to-stand, or muscle length, but did affect donor-site knee flexion strength, walking speed, tactile sensation, and the physical health-related quality of life.

Keywords: vastus lateralis, morbidity, head and neck, surgery, donor-site morbidity

Procedia PDF Downloads 238
8203 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demand and develop appropriate and adequate procurement plans that meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company must pay for additional storage space, and some items, when stored for a long period, deteriorate into dead stock. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered. Customers' actual purchase orders do not equal the forecasts the customers provide. In some cases, customers demand more, so stock is insufficient to meet their needs; in other cases, customers demand less than estimated, causing insufficient storage space and dead stock. This study aims to reduce lost sales opportunities and the quantity of goods remaining in the warehouse, using a sample of 30 of the company's most popular products. The data were collected over the duration of the study, from January to October 2022. The forecasting methods used are the simple moving average, weighted moving average, and exponential smoothing. The economic order quantity (EOQ) and reorder point are then calculated to meet customer needs, and the results are tracked. The research results are very beneficial to the company: it can reduce lost sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce dead stock by up to 10%. This enables the company to order products more accurately, increasing profit and storage space.
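The forecasting-to-ordering chain described above can be sketched in a few lines using exponential smoothing, the classical EOQ formula and a reorder point. The demand history, cost figures, lead time and safety stock below are all assumed for illustration; they are not the company's data.

```python
import math

# Hypothetical monthly demand history for one product (units/month).
demand = [120, 135, 128, 150, 142, 160, 155, 148, 170, 165]

def exp_smoothing(series, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    f = series[0]
    for d in series[1:]:
        f = alpha * d + (1 - alpha) * f
    return f

forecast = exp_smoothing(demand)            # forecast demand for next month

# Economic order quantity: EOQ = sqrt(2 * D * S / H), with assumed costs.
annual_demand = forecast * 12
order_cost, holding_cost = 500.0, 25.0      # per order / per unit per year
eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Reorder point = expected demand over lead time + safety stock (assumed).
lead_time_months, safety_stock = 0.5, 20.0
reorder_point = forecast * lead_time_months + safety_stock
print(round(forecast, 1), round(eoq, 1), round(reorder_point, 1))
```

Replacing `exp_smoothing` with a simple or weighted moving average changes only the forecast step; the EOQ and reorder point calculations downstream stay the same.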

Keywords: demand forecast, reorder point, lost sale, dead stock

Procedia PDF Downloads 109
8202 The Use of Learning Management Systems during Emerging the Tacit Knowledge

Authors: Ercan Eker, Muhammer Karaman, Akif Aslan, Hakan Tanrikuluoglu

Abstract:

A deficiency of institutional memory and knowledge management can result in information security breaches, loss of prestige and trustworthiness and, at worst, the loss of know-how and institutional knowledge. Traditional learning management within organizations is generally handled by personal effort, and that kind of effort mostly depends on personal desire, motivation and institutional belonging. Even if an organization has highly motivated employees at a certain time, the life cycle of its institutional knowledge and memory will generally remain limited to those employees' tenure in the organization. Having a learning management system can sustain the institutional memory, knowledge and know-how in the organization. Learning management systems are especially needed in public organizations, where job rotation is frequent and managers are appointed periodically. However, a learning management system should not be seen as an organization's website; it is a more comprehensive, interactive and user-friendly knowledge management tool. In this study, the importance of using learning management systems in the process of emerging tacit knowledge is underlined.

Keywords: knowledge management, learning management systems, tacit knowledge, institutional memory

Procedia PDF Downloads 374
8201 Correlations between Obesity Indices and Cardiometabolic Risk Factors in Obese Subgroups in Severely Obese Women

Authors: Seung Hun Lee, Sang Yeoup Lee

Abstract:

Objectives: To investigate associations across degrees of obesity using correlations between obesity indices and cardiometabolic risk factors. Methods: BMI, waist circumference (WC), fasting insulin, fasting glucose, lipids, and visceral adipose tissue (VAT) area measured from computed tomographic images were obtained in 113 obese women without cardiovascular disease (CVD). Correlations between obesity indices and cardiometabolic risk factors were analyzed in obese subgroups defined using sequential obesity index cut-offs. Results: Mean BMI and WC were 29.6 kg/m2 and 92.8 cm, respectively. BMI showed significant correlations with all five cardiometabolic risk factors until the BMI cut-off point reached 27 kg/m2, but when it exceeded 30 kg/m2, the correlations no longer existed. WC was significantly correlated with all five cardiometabolic risk factors up to a value of 85 cm, but when WC exceeded 90 cm, the correlations no longer existed. Conclusions: Our data suggest that moderate weight-loss goals may not be enough to ameliorate cardiometabolic markers in severely obese patients. Therefore, individualized weight-loss goals should be recommended to such patients to improve health benefits.

Keywords: correlation, cardiovascular disease, risk factors, obesity

Procedia PDF Downloads 348
8200 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts and stops. Vibration signature extraction and wear debris analysis are prevalent in industry for condition monitoring of rotating machinery. However, both techniques involve a great deal of technical expertise, time and cost, and limited literature is available on their application to reciprocating machinery, because the complex impact forces confound the extraction of fault signals for vibration-based analysis and wear prediction. This work extends a previous study in which an engine simulation model was developed in MATLAB/Simulink, with the engine parameters used in the simulation obtained experimentally from a Toyota 3S-FE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load and clearance on the vibration response. Three loads (50/80/110 N-m), three speeds (1500/2000/3000 rpm) and three clearances (normal, 2 times and 4 times normal) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load, but rose significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. The dynamics of the engine were established first, and the hydrodynamic journal bearing forces were then evaluated by numerical solution of the Reynolds equation.
The essential outputs of interest, critical for determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear; a limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, the developed model both explains the wear behavior and helps to establish a correlation between wear-based and vibration-based analysis, providing a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
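Archard's law and the 1 µm film-thickness criterion described above can be sketched in a few lines. The wear coefficient, load, sliding distance and hardness below are assumed round numbers for illustration, not the simulation's parameters.

```python
def archard_wear_volume(k, load_N, sliding_dist_m, hardness_Pa):
    """Archard's law: worn volume V = k * W * s / H (m^3), with
    dimensionless wear coefficient k, normal load W, sliding distance s,
    and hardness H of the softer surface."""
    return k * load_N * sliding_dist_m / hardness_Pa

MIN_FILM_M = 1e-6   # 1 micron minimum oil film thickness, as in the study

def contact_likely(film_thickness_m):
    """Flag accelerated wear when the oil film falls below the limit."""
    return film_thickness_m < MIN_FILM_M

# e.g. k = 1e-7, 2 kN bearing load, 100 m of sliding, ~1 GPa bearing hardness
V = archard_wear_volume(1e-7, 2000.0, 100.0, 1e9)
print(V, contact_likely(0.8e-6))   # 2e-11 True
```

In the full simulation both quantities are evaluated per crank-angle increment, so the wear rate carries location information around the bearing circumference; here a single evaluation shows the arithmetic.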

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 308
8199 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue worldwide because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes in precipitation and temperature over the last three climatological standard normals (1961-1990, 1971-2000, 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. Cross-validation results were used to select the best geostatistical interpolation technique. Among the methods analyzed, co-kriging with altitude as the independent variable produced the best cross-validation results for all time periods, with the root mean square error standardized close to 1, the mean standardized error close to 0, and similar values of the average standard error and root mean square error. The maps resulting from the analysis were compared by raster subtraction, producing three maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in mean annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and of 0.6°C between 1961-1990 and 1981-2010. Annual precipitation shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas were highlighted with area graphs and summarized in several tables as descriptive analysis.
For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08°C (77.04 km² of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 show a most areally represented frequency of 0.83°C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. The distribution is therefore more peaked for 1961/1990-1971/2000 and smoother, but with stronger growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar distribution shape, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1.93 and 1.34), skewness (1.81 and 1.62) and area of the most represented frequency (60.72 km² and 52.80 km²). In conclusion, this methodology allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
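The raster-subtraction step that produces the change maps can be sketched with toy arrays; the temperature values below are invented for illustration, not the interpolated Macerata normals.

```python
import numpy as np

# Toy 3x3 mean-temperature rasters (degrees C) standing in for two
# interpolated climatological normals on the same grid.
t_1961_1990 = np.array([[12.0, 12.5, 13.0],
                        [11.5, 12.2, 12.8],
                        [11.0, 11.8, 12.4]])
t_1981_2010 = t_1961_1990 + np.array([[0.5, 0.6, 0.7],
                                      [0.6, 0.6, 0.5],
                                      [0.7, 0.5, 0.6]])

# Cell-by-cell subtraction yields the change map; its mean summarizes
# the overall warming, as in the study's annual-variation maps.
diff = t_1981_2010 - t_1961_1990
print(diff)
print(round(float(diff.mean()), 2))
```

On the real data this subtraction is done between the co-kriged rasters for each pair of normals, once per year and once per month, before the area-frequency, skewness and kurtosis statistics are computed on the resulting difference raster.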

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 119
8198 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model used in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was used for the numerical simulation. Polynomial and neural-network-based response surfaces were generated to analyze the relationship between soil parameters and displacements, and the performance of the surrogate models was analyzed and compared by evaluating the root mean square error. A comparative study was conducted based on objective functions and optimization techniques. The objective functions, which consider the measured data with and without instrument uncertainty, are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, was used to obtain the optimum. The genetic algorithm (GA), particle swarm optimization (PSO) and differential evolution (DE) were compared for the minimization problem; all of these techniques take time to converge to an optimum, but PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be used effectively for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
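The inverse-analysis loop reduces to minimizing a least-squares misfit over candidate soil parameters. In the sketch below, the forward model is a hypothetical stand-in for the Plaxis simulation (or a response surface fitted to it), the parameter names (E, phi) and bounds are invented, and a crude random search stands in for PSO/GA/DE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: displacement at three gauges as a function of
# two soil parameters (a stiffness E and a friction angle phi).
def forward_model(E, phi):
    return np.array([100.0 / E + 0.2 * phi,
                     150.0 / E + 0.1 * phi,
                      80.0 / E + 0.3 * phi])

# Synthetic "measured" displacements generated from known true parameters,
# so the identified parameters can be checked against the truth.
measured = forward_model(50.0, 30.0)

def objective(params):
    """Least-squares misfit between measured and simulated displacements."""
    E, phi = params
    return float(np.sum((forward_model(E, phi) - measured) ** 2))

# Crude stochastic search over the parameter box (stand-in for PSO/GA/DE).
best, best_f = None, np.inf
for _ in range(5000):
    cand = rng.uniform([10.0, 10.0], [100.0, 45.0])
    f = objective(cand)
    if f < best_f:
        best, best_f = cand, f
print(best, best_f)
```

A metaheuristic such as PSO replaces the random sampling with guided moves, which is what makes the search tractable when each objective evaluation is a full finite element run rather than a cheap surrogate.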

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis

Procedia PDF Downloads 140
8197 Reinforcement Learning Optimization: Unraveling Trends and Advancements in Metaheuristic Algorithms

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The field of machine learning (ML) is developing rapidly, producing a multitude of theoretical advancements and extensive practical implementations across various disciplines. The objective of ML is to enable machines to perform cognitive tasks by leveraging knowledge gained from prior experience and to address complex problems effectively, even in situations that deviate from previously encountered instances. Reinforcement learning (RL) has emerged as a prominent subfield of ML and has recently gained considerable attention from researchers, a surge of interest attributable to the practical applications of RL, the increasing availability of data, and the rapid advancement of computing power. At the same time, optimization algorithms play a pivotal role in ML and have attracted considerable interest of their own: a multitude of proposals have been put forth to address optimization problems or improve optimization techniques within the domain of ML. A thorough examination and implementation of optimization algorithms in the context of ML is therefore essential to guide the advancement of research in both optimization and ML. This article provides a comprehensive overview of the application of metaheuristic evolutionary optimization algorithms in conjunction with RL to address a diverse range of scientific challenges, and delves into the various challenges and unresolved issues pertaining to the optimization of RL models.

Keywords: machine learning, reinforcement learning, loss function, evolutionary optimization techniques

Procedia PDF Downloads 71