Search results for: interferences and analytical errors
2956 Development of Ecofriendly Ionic Liquid Modified Reverse Phase Liquid Chromatography Method for Simultaneous Determination of Anti-Hyperlipidemic Drugs
Authors: Hassan M. Albishri, Fatimah Al-Shehri, Deia Abd El-Hady
Abstract:
Among analytical techniques, reverse phase liquid chromatography (RPLC) is widely used in the pharmaceutical industry. Ecofriendly analytical chemistry reduces environmental impact while increasing operator safety, which has made it a topic of industrial interest. Recently, ionic liquids have been successfully used to reduce or eliminate conventional toxic organic solvents. In the current work, a simple and ecofriendly ionic liquid modified RPLC (IL-RPLC) method has been developed for the first time and compared with RPLC under acidic and neutral mobile phase conditions for the simultaneous determination of atorvastatin-calcium, rosuvastatin and simvastatin. Several effective chromatographic parameters were varied in a systematic way. Adequate results were achieved by mixing ILs with ethanol as a mobile phase under neutral conditions at a 1 mL/min flow rate on a C18 column. The developed IL-RPLC method has been validated for the quantitative determination of these drugs in pharmaceutical formulations. The method showed excellent linearity for the analytes over a wide concentration range with acceptable precision and accuracy. The current IL-RPLC technique could have vast applications, particularly under neutral conditions, for simple and greener (bio)analytical applications of pharmaceuticals.
Keywords: ionic liquid, RPLC, anti-hyperlipidemic drugs, ecofriendly
Procedia PDF Downloads 256
2955 Parameter Estimation of Induction Motors by PSO Algorithm
Authors: A. Mohammadi, S. Asghari, M. Aien, M. Rashidinejad
Abstract:
After the emergence and popularization of alternating-current networks, asynchronous motors became more widespread than other kinds of industrial motors. In order to control and run these motors efficiently, an accurate estimation of the motor parameters is needed. There are different methods to obtain these parameters, such as the locked-rotor test, the no-load test, the DC test, analytical methods, and so on. The most common drawback of these methods is their inaccuracy in estimating some motor parameters. To address this shortcoming, a novel method for parameter estimation of induction motors using the particle swarm optimization (PSO) algorithm is proposed. In the proposed method, the transient state of the motor is used for parameter estimation. Comparison of the simulation results obtained with the PSO algorithm against other available methods justifies the effectiveness of the proposed method.
Keywords: induction motor, motor parameter estimation, PSO algorithm, analytical method
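The abstract's core idea, fitting a motor parameter by minimizing the mismatch between measured and simulated responses with PSO, can be sketched as follows. This is a minimal illustration on a hypothetical one-parameter problem (a made-up rotor resistance and a linear response model), not the authors' actual motor model:

```python
import random

def pso_minimize(loss, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization for a single scalar parameter."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                           # personal best positions
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive (toward personal best) + social (toward global best)
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest

# Hypothetical example: recover a rotor resistance R = 2.5 ohm from
# simulated transient "measurements" following y = R * x.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [2.5 * x for x in xs]
loss = lambda R: sum((R * x - y) ** 2 for x, y in zip(xs, ys))
R_est = pso_minimize(loss, 0.0, 10.0)
```

In a real application, `loss` would compare recorded transient currents against a dynamic motor model evaluated at the candidate parameter vector.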
Procedia PDF Downloads 633
2954 Quality Control of Distinct Cements by IR Spectroscopy: First Insights into Perspectives and Opportunities
Authors: Tobias Bader, Joerg Rickert
Abstract:
One key factor in achieving net zero emissions along the cement and concrete value chain in Europe by 2050 is the use of distinct constituents to produce improved and advanced cements. These cements will contain e.g. calcined clays and recycled concrete fines that are chemically similar as well as X-ray amorphous and therefore difficult to distinguish. This leads to enhanced requirements on the analytical methods for quality control regarding accuracy as well as reproducibility due to the more complex cement composition. With the methods currently provided for in the European standards, it will be a challenge to ensure reliable analyses of the composition of the cements. In an ongoing research project, infrared (IR) spectroscopy in combination with mathematical tools (chemometrics) is being evaluated as an additional analytical method with fast and low preparation effort for the characterization of silicate-based cement constituents. The resulting comprehensive database should facilitate determination of the composition of new cements. First results confirmed the applicability of near-infrared (NIR) spectroscopy for the characterization of traditional silicate-based cement constituents (e.g. clinker, granulated blast furnace slag) and modern X-ray amorphous constituents (e.g. calcined clay, recycled concrete fines) as well as different sulfate species (e.g. gypsum, hemihydrate, anhydrite). A multivariate calibration model based on numerous calibration mixtures is in preparation. The final analytical concept to be developed will form the basis for establishing IR spectroscopy as a rapid analytical method for characterizing material flows of known and unknown inorganic substances according to their material properties, online and offline.
The underlying project was funded by the Federal Institute for Research on Building, Urban Affairs and Spatial Development on behalf of the Federal Ministry of Housing, Urban Development and Building with funds from the ‘Zukunft Bau’ research programme.
Keywords: cement, infrared spectroscopy, quality control, X-ray amorphous
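The least-squares core of relating mixed spectra to constituent contents, which a multivariate calibration model builds on, can be illustrated with synthetic data. The "spectra" below are invented for illustration; the project's NIR database and chemometric model are far richer:

```python
def unmix(mix, s1, s2):
    """Estimate fractions a, b such that mix ~= a*s1 + b*s2,
    via ordinary least squares (solving the 2x2 normal equations)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    a11, a12, a22 = dot(s1, s1), dot(s1, s2), dot(s2, s2)
    b1, b2 = dot(s1, mix), dot(s2, mix)
    det = a11 * a22 - a12 * a12
    a = (b1 * a22 - b2 * a12) / det
    b = (a11 * b2 - a12 * b1) / det
    return a, b

# Hypothetical pure-component "spectra" (absorbance at 5 wavenumbers)
clinker = [0.9, 0.4, 0.1, 0.3, 0.7]
slag    = [0.2, 0.8, 0.6, 0.1, 0.2]
# A synthetic 60/40 clinker/slag blend
mixture = [0.6 * c + 0.4 * s for c, s in zip(clinker, slag)]
frac_clinker, frac_slag = unmix(mixture, clinker, slag)
```

Real chemometric workflows replace this two-component solve with calibrated multivariate models (e.g. PLS regression) trained on many mixtures, but the underlying idea of projecting a measured spectrum onto reference responses is the same.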
Procedia PDF Downloads 39
2953 Clinical Outcomes of Toric Implantable Collamer Lens (T-ICL) and Toric Implantable Phakic Contact Lens (IPCL) for Correction of High Myopia with Astigmatism: Comparative Study
Authors: Mohamed Salah El-Din Mahmoud, Heba Radi Atta Allah
Abstract:
Background: Our study assesses the safety profile and efficacy of the toric Implantable Collamer Lens (T-ICL) and the toric implantable phakic contact lens (IPCL) for the correction of high myopia with astigmatism. Methods: A prospective interventional randomized comparative study included 60 myopic eyes divided into 2 groups: group A, including 30 eyes implanted with T-ICL, and group B, including 30 eyes implanted with toric IPCL. The refractive results, visual acuity, corneal endothelial cell count, and intraocular pressure (IOP) were evaluated at baseline and at 1, 6, and 9 months post-surgery. Any complications, either during or after surgery, were assessed. Results: A significant reduction in both spherical and cylindrical refractive errors with good predictability was reported in both groups compared with preoperative values. Regarding predictability, in the T-ICL group (A), the median spherical and cylindrical errors improved significantly from (-10 D & -4.5 D) pre-operatively to (-0.25 D & -0.3 D) at the end of the 9-month follow-up period. Similarly, in the toric IPCL group (B), the median spherical and cylindrical errors improved significantly from (-11 D & -4.5 D) pre-operatively to (-0.25 D & -0.3 D) at the end of the 9-month follow-up period. A statistically significant improvement of UCDVA at 9 months postoperatively was found in both groups, as median preoperative LogMAR UCDVA was 1.1 and 1.3 in groups A and B respectively, which improved significantly to 0.2 in both groups at the end of the follow-up period. Regarding IOP, no significant difference was found between the groups, either pre-operatively or during the postoperative period. Regarding the endothelial count, no significant differences were found during the pre-operative and postoperative follow-up periods between the two groups. No intra- or postoperative complications such as cataract, keratitis or lens decentration occurred.
Conclusions: Toric IPCL is a suitable alternative to T-ICL for the management of high myopia with astigmatism, especially in developing countries, as it is cheaper and easier to implant than T-ICL. However, data over longer follow-up periods are needed to confirm its safety and stability.
Keywords: T-ICL, Toric IPCL, IOP, corneal endothelium
Procedia PDF Downloads 148
2952 Analytical and Statistical Study of the Parameters of Expansive Soil
Authors: A. Medjnoun, R. Bahar
Abstract:
The disorders caused by the shrinking-swelling phenomenon are prevalent in arid and semi-arid regions in the presence of swelling clay. This soil has the characteristic of changing state under the effect of water solicitation (wetting and drying). A set of geotechnical parameters is necessary for the characterization of this soil type, such as state parameters, physical and chemical parameters, and mechanical parameters. Some of these tests are very long and some are very expensive, hence the use of prediction methods. The complexity of this phenomenon and the difficulty of its characterization have prompted researchers to use several identification parameters in the prediction of swelling potential. This document is an analytical and statistical study of the geotechnical parameters affecting the swelling potential of clays. This work is performed on a database obtained from investigations of swelling Algerian soils. The obtained observations have helped us to understand the structure of swelling soil and its behavior.
Keywords: analysis, estimated model, parameter identification, swelling of clay
Procedia PDF Downloads 417
2951 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory
Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov
Abstract:
The boundary value problem on non-canonical, arbitrarily shaped contours is solved with a numerically effective method called the Analytical Regularization Method (ARM) to calculate propagation parameters. As a result of regularization, an equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy by using the truncation method. Parameters such as the cut-off wavenumber and cut-off frequency are used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
Keywords: analytical regularization method, evolutionary equations of time-domain electromagnetic theory, TM field
Procedia PDF Downloads 500
2950 Multi-Scale Control Model for Network Group Behavior
Authors: Fuyuan Ma, Ying Wang, Xin Wang
Abstract:
Social networks have become breeding grounds for the rapid spread of rumors and malicious information, posing threats to societal stability and causing significant public harm. Existing research focuses on simulating the spread of information and its impact on users through propagation dynamics and applies methods such as greedy approximation strategies to approximate the optimal control solution at the global scale. However, the greedy strategy at the global scale may fall into locally optimal solutions, and the approximate simulation of information spread may accumulate errors. Therefore, we propose a multi-scale control model for network group behavior, introducing individual and group scales on top of the greedy strategy's global scale. At the individual scale, we calculate the propagation influence of nodes based on their structural attributes to alleviate the issue of local optimality. At the group scale, we conduct precise propagation simulations to avoid introducing cumulative errors from approximate calculations without increasing computational costs. Experimental results on three real-world datasets demonstrate the effectiveness of our proposed multi-scale model in controlling network group behavior.
Keywords: influence blocking maximization, competitive linear threshold model, social networks, network group behavior
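The paper's competitive linear threshold model and multi-scale controller are more elaborate, but the basic cascade they build on, and the effect of blocking an influential node, can be sketched on a hypothetical six-node graph (all structure and numbers below are invented for illustration):

```python
def spread(adj, seeds, blocked, threshold=0.5):
    """Simulate a simple linear-threshold cascade: an inactive node
    activates once the fraction of its active neighbours reaches the
    threshold. Blocked nodes never activate. Returns the active set."""
    active = set(seeds) - set(blocked)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in active or node in blocked or not nbrs:
                continue
            if sum(n in active for n in nbrs) / len(nbrs) >= threshold:
                active.add(node)
                changed = True
    return active

# Hypothetical small network: node 2 bridges the rumour source (node 0)
# to the rest of the graph, so blocking it should contain the cascade.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
full = spread(adj, seeds=[0], blocked=[])
cut  = spread(adj, seeds=[0], blocked=[2])
```

Here the unblocked cascade reaches every node, while blocking the bridge node confines it to {0, 1}, which is the intuition behind choosing blocking targets by structural influence.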
Procedia PDF Downloads 21
2949 True Single SKU Script: Applying the Automated Test to Set Software Properties in a Global Software Development Environment
Authors: Antonio Brigido, Maria Meireles, Francisco Barros, Gaspar Mota, Fernanda Terra, Lidia Melo, Marcelo Reis, Camilo Souza
Abstract:
As the globalization of the software process advances, companies are increasingly committed to improving software development technologies across multiple locations. On the other hand, working with teams distributed across different locations also raises new challenges. In this sense, automated processes can help to improve the quality of process execution. Therefore, this work presents the development of a tool called TSS Script that automates the sample preparation process for carrier requirements validation tests. The objective of the work is to obtain significant gains in execution time and to reduce errors in scenario preparation. To estimate the gains in time, the executions performed in automated and manual ways were timed. In addition, a questionnaire-based survey was developed to discover new requirements and improvements to include in this automated support. The results show an average gain of 46.67% of the total hours worked on sample preparation. The use of the tool avoids human errors, and for this reason, it adds greater quality and speed to the process. Another relevant factor is that the tester can perform other activities in parallel with sample preparation.
Keywords: Android, GSD, automated testing tool, mobile products
Procedia PDF Downloads 317
2948 Method Development and Validation for Quantification of Active Content and Impurities of Clodinafop Propargyl and Its Enantiomeric Separation by High-Performance Liquid Chromatography
Authors: Kamlesh Vishwakarma, Bipul Behari Saha, Sunilkumar Sing, Abhishek Mishra, Sreenivas Rao
Abstract:
A rapid, sensitive and inexpensive method has been developed for the complete analysis of Clodinafop Propargyl. Clodinafop Propargyl enantiomers were separated on a chiral column, Chiralpak AS-H (250 mm × 4.6 mm, 5 µm), with a mobile phase of n-hexane:IPA (96:4) at a flow rate of 1.5 mL/min. The effluent was monitored by a UV detector at 230 nm. Quantification of Clodinafop Propargyl content and impurities was done with reverse phase HPLC. The present study describes an HPLC method using a simple mobile phase for the quantification of Clodinafop Propargyl and its impurities. The method was validated and found to be accurate, precise, convenient and effective. Moreover, the lower solvent consumption along with the short analytical run time makes for a cost-effective analytical method.
Keywords: Clodinafop Propargyl, method, validation, HPLC-UV
Procedia PDF Downloads 371
2947 Electrochemically Reduced Graphene Oxide Modified Boron-Doped Diamond Paste Electrode on Paper-Based Analytical Device for Simultaneous Determination of Norepinephrine and Serotonin
Authors: Siriwan Nantaphol, Robert B. Channon, Takeshi Kondo, Weena Siangproh, Orawon Chailapakul, Charles S. Henry
Abstract:
In this work, we demonstrate a novel electrochemically reduced graphene oxide (ERGO) modified boron-doped diamond paste (BDDP) electrode on paper-based analytical devices (PADs) for the simultaneous determination of norepinephrine (NE) and serotonin (5-HT). The BDD paste electrode was easily constructed by filling BDD paste into small channels made in transparency film sheets using a CO₂ laser etching system. The counter and reference electrodes were fabricated on paper by in-house screen-printing and then combined with the BDD paste microelectrode. The electrochemical characterization of the device was investigated by cyclic voltammetry (CV). Differential pulse voltammetry (DPV) was employed for the simultaneous determination of NE and 5-HT. The ERGO-modified BDDP electrode displayed excellent electrocatalytic activity toward the oxidation of NE and 5-HT and a strong capability for resolving the overlapping voltammetric responses of NE and 5-HT into two well-defined voltammetric peaks. This device was capable of simultaneously detecting NE and 5-HT over wide concentration ranges and with low limits of detection. In addition, it has advantages in terms of ease of use, low cost, and disposability.
Keywords: boron-doped diamond paste electrode, electrochemically reduced graphene oxide, norepinephrine, paper-based analytical device, serotonin
Procedia PDF Downloads 259
2946 AI-Based Technologies for Improving Patient Safety and Quality of Care
Authors: Tewelde Gebreslassie Gebreanenia, Frie Ayalew Yimam, Seada Hussen Adem
Abstract:
Patient safety and quality of care are essential goals of health care delivery, but they are often compromised by human errors, system failures, or resource constraints. In a variety of healthcare contexts, artificial intelligence (AI), a quickly developing field, can provide fresh approaches to enhancing patient safety and treatment quality. AI has the potential to decrease errors and enhance patient outcomes by carrying out tasks that would typically require human intelligence. These tasks include the detection and prevention of adverse events; monitoring and warning patients and clinicians about changes in vital signs, symptoms, or risks; offering individualized and evidence-based recommendations for diagnosis, treatment, or prevention; and assessing and enhancing the effectiveness of health care systems and services. This study examines the state of the art and potential future applications of AI-based technologies for enhancing patient safety and care quality, as well as the opportunities and problems they present for patients, policymakers, researchers, and healthcare providers. In order to ensure the safe, efficient, and responsible application of AI in healthcare, the paper also addresses the ethical, legal, social, and technical challenges that must be addressed and regulated.
Keywords: artificial intelligence, health care, human intelligence, patient safety, quality of care
Procedia PDF Downloads 78
2945 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. In the case of a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
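The transfer step the abstract describes, scaling the measured 662 keV efficiency by simulated efficiency ratios to reach other photon energies, can be sketched as follows. All numerical values are hypothetical, chosen only for illustration, and are not the paper's results:

```python
def transfer_efficiency(measured_eff_662, sim_eff_662, sim_eff_target):
    """Estimate the full-energy-peak efficiency at another photon energy
    by scaling the measured Cs-137 (662 keV) efficiency with the ratio of
    simulated efficiencies. This assumes the simulation reproduces the
    detector's energy dependence, even if its absolute scale is off."""
    return measured_eff_662 * (sim_eff_target / sim_eff_662)

# Hypothetical numbers for illustration only (not measured data).
meas_662 = 0.0210                     # measured with a Cs-137 check source
sim_662, sim_1332 = 0.0205, 0.0121    # MCNP5-style simulated efficiencies
eff_1332 = transfer_efficiency(meas_662, sim_662, sim_1332)
```

The benefit is that only one long-lived, widely available check source (Cs-137) is needed in the lab, while the simulation supplies the relative efficiency curve across the spectrum.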
Procedia PDF Downloads 116
2944 Customers’ Priority to Implement SSTs Using AHP Analysis
Authors: Mohammad Jafariahangari, Marjan Habibi, Miresmaeil Mirnabibaboli, Mirza Hassan Hosseini
Abstract:
Self-service technologies (SSTs) make an important contribution to people's daily lives nowadays. However, the introduction of an SST does not guarantee its usage. Thereby, this paper attempts to discover the most preferred SST from the customers' point of view. To fulfill this aim, the Analytic Hierarchy Process (AHP) was applied based on Saaty's questionnaire, which was administered to customers of e-banking services located in Golestan province, north of Iran. This study used qualitative factors associated with consumers' intention to use SSTs to rank three SSTs: ATM, mobile banking, and internet banking. The results showed that mobile banking received the highest weight from the consumers' point of view. This research can be useful both for managers and service providers and also for customers who intend to use e-banking.
Keywords: analytical hierarchy process, decision-making, e-banking, self-service technologies, Iran
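The AHP weighting step can be sketched with the common row geometric-mean approximation of the priority vector. The pairwise judgements below are invented Saaty-scale values for illustration, not data from the study's questionnaire:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix via the row geometric-mean method."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical Saaty-scale judgements comparing three SSTs.
# pairwise[i][j] > 1 means alternative i is preferred over j.
pairwise = [
    [1,     3,     5],    # mobile banking
    [1 / 3, 1,     3],    # internet banking
    [1 / 5, 1 / 3, 1],    # ATM
]
w_mobile, w_internet, w_atm = ahp_weights(pairwise)
```

With these illustrative judgements, mobile banking receives the largest weight, mirroring the ranking direction reported in the abstract; a full AHP study would also check the consistency ratio of each respondent's matrix.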
Procedia PDF Downloads 318
2943 Comparing Phonological Processes in Persian-Arabic Bilingual Children and Monolingual Children
Authors: Vafa Delphi, Maryam Delphi, Talieh Zarifian, Enayatolah Bakhshi
Abstract:
Background and Aim: Bilingualism is a common phenomenon in many countries of the world, and consistent consonant errors may occur in the speech of bilingual children. The aim of this study was to evaluate phonological skills, including the occurrence proportion, frequency and type of phonological processes, in Persian-Arabic speaking children in Ahvaz city, the center of Khuzestan. Method: This study is descriptive-analytical and cross-sectional. Twenty-eight children aged 36-48 months were divided into two groups, Persian monolingual and Persian-Arabic bilingual (14 participants in each group). Participants were recruited randomly, based on inclusion criteria, from kindergartens of Ahvaz city in Iran. The tool of this study was the Persian Phonological Test (PPT), a subtest of the Persian Diagnostic Evaluation Articulation and Phonology test. In this test, phonological processes were investigated in two groups: structure and substitution processes. Data were analyzed using SPSS software and the Mann-Whitney U test. Results: The results showed that the occurrence proportion of substitution processes was significantly different between the monolingual and bilingual groups (P=0.001), but the type of phonological processes did not show a significant difference between monolingual and bilingual Persian-Arabic children. The frequency of phonological processes is greater in bilingual children than in monolingual children. Conclusion: The study showed that bilingualism has no effect on the type of phonological processes, but it can affect the frequency of processes. Since the type of phonological processes in bilingual children is similar to that in monolingual children, we can conclude that the Persian-Arabic bilingual children's phonological system is similar to that of monolingual children.
Keywords: Persian-Arabic bilingual child, phonological processes, the proportion occurrence of syllable structure, the proportion occurrence of substitution
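The group comparison used in the study, the Mann-Whitney U test, can be sketched from first principles. The sample values below are invented occurrence proportions for illustration, not the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples: the count
    of (x, y) pairs where the x value exceeds the y value, with ties
    contributing 0.5. Software such as SPSS converts U to a p-value."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical occurrence proportions of substitution processes.
bilingual   = [0.42, 0.51, 0.47, 0.55]
monolingual = [0.20, 0.25, 0.31, 0.28]
u_stat = mann_whitney_u(bilingual, monolingual)
```

Here every bilingual value exceeds every monolingual value, so U equals the maximum possible 4 × 4 = 16, the pattern a significant group difference would produce.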
Procedia PDF Downloads 316
2942 Development of Interaction Factors Charts for Piled Raft Foundation
Authors: Abdelazim Makki Ibrahim, Esamaldeen Ali
Abstract:
This study aims at analysing the load-settlement behavior and predicting the bearing capacity of piled raft foundations. A series of finite element models with different foundation configurations and stiffnesses were established. Numerical modeling is used to study the behavior of the piled raft foundation due to the complexity of pile, raft, and soil interaction and also due to the lack of a reliable analytical method that can predict the behavior of the piled raft foundation system. Simple analytical models are developed to predict the average settlement and the load sharing between the piles and the raft in a piled raft foundation system. A simple example to demonstrate the application of these charts is included.
Keywords: finite element, pile-raft foundation, method, PLAXIS software, settlement
Procedia PDF Downloads 557
2941 Integrating GIS and Analytical Hierarchy Process-Multicriteria Decision Analysis for Identification of Suitable Areas for Artificial Recharge with Reclaimed Water
Authors: Mahmoudi Marwa, Bahim Nadhem, Aydi Abdelwaheb, Issaoui Wissal, S. Najet
Abstract:
This work couples a geographic information system (GIS) with multicriteria analysis, aiming at the selection of an artificial recharge site using treated wastewater for the Ariana governorate. Based on regional characteristics, the bibliography and available data on artificial recharge, 13 constraints and 5 factors were hierarchically structured to assess suitability for artificial recharge. The factors are subdivided into two main groups: environmental factors and economic factors. The adopted methodology allows a preliminary assessment of a recharge site by combining the weighted linear combination (WLC) and the analytic hierarchy process (AHP) in a GIS. The standardization of the criteria is carried out by applying different membership functions, whose form and control points are defined in consultation with experts. The weighting of the selected criteria is allocated according to relative importance using the AHP methodology. The weighted linear combination (WLC) integrates the different criteria and factors to delineate the areas most suitable for artificial recharge with treated wastewater. The results of this study showed three potential candidate sites that appear when environmental factors are weighted more heavily than economic factors. These sites are ranked in descending order using the ELECTRE III method. Nevertheless, the final decision on an artificial recharge site will depend on the decision makers in charge.
Keywords: artificial recharge site, treated wastewater, analytical hierarchy process, ELECTRE III
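The per-cell WLC computation described above, a weighted sum of standardized factor scores masked by Boolean constraints, can be sketched as follows. The scores, weights, and constraints below are invented for illustration; in the study, the weights come from AHP and the scores from membership functions over GIS raster layers:

```python
def wlc_suitability(scores, weights, constraints):
    """Weighted linear combination for one raster cell: the weighted sum
    of standardized factor scores, masked to 0 when any Boolean
    constraint fails (the usual GIS-MCDA formulation)."""
    if not all(constraints):
        return 0.0
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical cell: 5 factor scores standardized to [0, 1] and
# AHP-derived weights summing to 1 (illustrative values only).
scores  = [0.8, 0.6, 0.9, 0.5, 0.7]
weights = [0.30, 0.25, 0.20, 0.15, 0.10]
suit_ok  = wlc_suitability(scores, weights, constraints=[True, True])
suit_bad = wlc_suitability(scores, weights, constraints=[True, False])
```

Applying this cell by cell over the study area produces the suitability map from which candidate sites are extracted and then ranked (here with ELECTRE III).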
Procedia PDF Downloads 166
2940 Root Cause Analysis of Excessive Vibration in a Feeder Pump of a Large Thermal Electric Power Plant: A Simulation Approach
Authors: Kavindan Balakrishnan
Abstract:
Root cause identification of the vibration phenomenon in a feedwater pumping station was the main objective of this research. First, the mode shapes of the pumping structure were investigated using numerical and analytical methods. Then the flow pressure and streamline distribution in the pump sump were examined using C.F.D. simulation, which was hypothesized to be a cause of vibration in the pumping station. As the problem specification of this research states, heavy vibration was recorded in the pumping station, with four parallel pumps operating at the same time, even after several maintenance steps. It also specified that a relatively large amplitude of vibration was excited by pumps 1 and 4 while the others remained normal. As a result, the focus of this research was on determining the cause of such a mode of vibration in the pump station with the assistance of finite element analysis tools and analytical methods. The research revealed structural behavior consistent with the vibration pattern observed in the pumping structure. The numerical and analytical models of the pump structure have similar characteristics in their mode shapes, particularly in the 2nd mode shape, which is closely related to the problem stated in the research. Since this study reveals several possible points of flow visualization in the pump sump model that can be a favorable cause of vibration in the system, there is room for further investigation of flow conditions relating to pump vibrations.
Keywords: vibration, simulation, analysis, Ansys, Matlab, mode shapes, pressure distribution, structure
Procedia PDF Downloads 124
2939 Analytical Hierarchical Process for Multi-Criteria Decision-Making
Authors: Luis Javier Serrano Tamayo
Abstract:
This research makes a first approach to the selection of an amphibious landing ship with strategic capabilities through the implementation of a multi-criteria model using the Analytic Hierarchy Process (AHP), in which a significant group of latest-technology alternatives has been considered. The variables were grouped at different levels to match design and performance characteristics, which affect the lifecycle as well as the acquisition, maintenance and operational costs. The model yielded an overall measure of effectiveness and an overall measure of cost for each kind of ship, which were compared with each other inside the model and shown in a Pareto chart. The modeling was developed using the Expert Choice software, based on the AHP method.
Keywords: analytic hierarchy process, multi-criteria decision-making, Pareto analysis, Colombian Marine Corps, projection operations, expert choice, amphibious landing ship
Procedia PDF Downloads 549
2938 Study Variation of Blade Angle on the Performance of the Undershot Waterwheel on the Pico Scale
Authors: Warjito, Kevin Geraldo, Budiarso, Muhammad Mizan, Rafi Adhi Pranata, Farhan Rizqi Syahnakri
Abstract:
According to data from 2021, the share of households in Indonesia with access to on-grid electricity is claimed to have reached 99.28%, which means that around 0.7% of Indonesia's population (1.95 million people) still has no proper access to electricity, and 38.1% of them live in remote areas in Nusa Tenggara Timur. Remote areas are classified as areas with a small population of 30 to 60 families that have limited infrastructure, scarce access to electricity and clean water, a relatively weak economy, little access to technological innovation, and residents who earn a living mostly as farmers or fishermen. These people still need electricity but cannot afford the high cost of electricity from national on-grid sources. To overcome this, it is proposed that a hydroelectric power plant driven by a pico-hydro turbine with an undershot water wheel is a suitable pico-hydro technology, because the design, materials, and installation of the turbine are believed to be easier (i.e., operation and maintenance) and cheaper (i.e., investment and operating costs) than for any other type. A comparative study of the blade angle of the undershot water wheel will be discussed comprehensively, looking into which variation of curved blades produces the maximum hydraulic efficiency. In this study, the blade angles were varied among 180°, 160°, and 140°. Two methods of analysis are used: analytical and numerical. The analytical method is based on calculations of the torque and rotational speed of the turbine, which are used to obtain the input and output power of the turbine, whereas the numerical method uses the ANSYS application to simulate the flow during the collision with the designed turbine blades.
It can be concluded, based on the analytical and numerical methods, that the best angle for the blade is 140°, with an efficiency of 43.52% for the analytical method and 37.15% for the numerical method.
Keywords: pico hydro, undershot waterwheel, blade angle, computational fluid dynamics
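The analytical efficiency calculation described above, shaft power from torque and rotational speed divided by the available hydraulic power, can be sketched as follows. The operating-point numbers are hypothetical, chosen only for illustration, and are not the study's measurements:

```python
import math

def waterwheel_efficiency(torque, rpm, flow, head, rho=998.0, g=9.81):
    """Hydraulic efficiency of a waterwheel: shaft power (torque times
    angular speed) divided by available hydraulic power (rho*g*Q*H)."""
    omega = 2.0 * math.pi * rpm / 60.0      # rotational speed, rad/s
    p_shaft = torque * omega                # mechanical output power, W
    p_hydraulic = rho * g * flow * head     # available water power, W
    return p_shaft / p_hydraulic

# Hypothetical pico-scale operating point: 8 N·m at 40 rpm from a
# 20 L/s flow over a 0.35 m head (illustrative values only).
eta = waterwheel_efficiency(torque=8.0, rpm=40.0, flow=0.02, head=0.35)
```

In the study, torque and rotational speed would come from the analytical blade-force model for each blade angle, and the same efficiency ratio would then be compared against the ANSYS result.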
Procedia PDF Downloads 77
2937 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
The non-invasive samples are an alternative of collecting genetic samples directly. Non-invasive samples are collected without the manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed for some years a widespread use of non-invasive genetic information. Possibilities to limit genotyping errors can be done using analysis methods that can assimilate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Although, this recent development of analysis methods there is still a lack of empirical performance comparison of them. A comparison of methods with dataset different in size and structure can be useful for future studies since non-invasive samples are a powerful tool for getting information specially for endangered and rare populations. To compare the analysis methods, four different datasets used were obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. The ETLM produced less number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise since the similarity between those methods on the likelihood pairwise and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes that were matched with the other methods. 
ETLM's distinct clustering algorithm and error model seem to lead to a more stringent selection, although ETLM's processing time and user interface were the worst among the compared methods. The population estimators performed differently across the datasets; the different estimators agreed on only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, tolerance for heterogeneity seems crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony is more appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.Keywords: algorithms, genetics, matching, population
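As background for how recapture counts drive abundance estimates, a classical closed-population estimator (Chapman's bias-corrected variant of Lincoln-Petersen — a textbook baseline, not the Capwire or BayesN methods compared in the study) can be sketched as:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    n1: individuals genotyped in the first sampling session
    n2: individuals genotyped in the second session
    m2: individuals of the second session already seen in the first (recaptures)
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 50 genotypes in session one, 40 in session two, 20 recaptures
print(chapman_estimate(50, 40, 20))  # ≈ 98.6 individuals
```

Like BayesN, such estimators assume roughly equal capture probabilities, which is why heterogeneous capture rates bias the results.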
Procedia PDF Downloads 1432936 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs
Authors: Junaid Mahmood Alam
Abstract:
Revalidating and harmonizing clinical chemistry analytical principles, and optimizing methods through quality control programs and assessments, is the preeminent means of attaining optimal outcomes in clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimizing precision analyses and processing them through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I), and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunology analyzers, namely the Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, using UV kinetic and colorimetric dry chemistry principles and electro-chemiluminescence immunoassay (ECLi) techniques. Validation and revalidation were completed by evaluating the precision-analyzed Preci-control data of the various instruments, plotted against each other with regression (R2) analyses. Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI, and NEQAPP assessments, depicted 99.0% to 99.8% optimization of the methodology and instruments used for the analyses. The regression R2 for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R2 values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of the analytical methods and instrumentation, thus revalidating the optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. 
This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.Keywords: revalidation, standardized, IFCC, CAP, harmonized
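The instrument-to-instrument regression step described above can be sketched in a few lines; the paired QC values below are synthetic illustrations, not data from the study.

```python
import numpy as np

def regression_r2(x, y):
    """Coefficient of determination (R^2) from a least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# synthetic paired QC results from two analyzers (illustrative values only)
a = np.array([1.02, 2.05, 3.01, 3.98, 5.03])   # analyzer 1
b = np.array([1.00, 2.10, 2.97, 4.02, 5.00])   # analyzer 2
print(round(regression_r2(a, b), 3))
```

An R2 close to 1, as reported for the parameters above, indicates that the two instruments can be used interchangeably for that analyte.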
Procedia PDF Downloads 2692935 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester
Authors: Robert Long
Abstract:
This study presents the results of an investigation into the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester. One goal was to determine whether writing abilities improved over the term; another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. For the editing, participants were divided into two groups: one used an online grammar checker, while the other self-edited their initial manuscripts. There was a total of 159 students from three different institutions. The research questions focused on determining whether CAF evolved over the semester, identifying potential variations in editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) on all three measures, whereas there was a marked decline in complexity and fluency. As for the second research aim, concerning the interaction among the three CAF dimensions and whether increases in fluency are offset by decreases in grammatical accuracy, results showed a logically high correlation between clause and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors/100 ratio correlated highly with error-free clause totals (EFCT). Syntactic complexity correlated negatively with EFCT, indicating that greater syntactic complexity is associated with decreased accuracy. 
Concerning differences in error correction between those who self-edited and those who used an online grammar correction tool, results indicated that the error-free clause ratio (EFCR) showed the greatest difference in accuracy, with fewer errors noted for writers using an online grammar checker. As for differences between the first and second (edited) drafts regarding CAF, results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while changes in fluency were relatively insignificant. Results also indicated significant differences among the three institutions, with Fujian University of Technology showing the highest fluency and accuracy. These findings suggest that to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.Keywords: complexity, accuracy, fluency, writing
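The CAF indices named above are simple ratios over clause and T-unit counts; a sketch of their computation (with hypothetical counts, since the study's scoring scheme is not detailed here) is:

```python
def caf_metrics(words, t_units, clauses, error_free_clauses, coord_phrases):
    """Common CAF indices used in L2 writing research (illustrative formulas)."""
    return {
        "MLT":  words / t_units,               # mean length of T-unit
        "C/T":  clauses / t_units,             # clauses per T-unit (complexity)
        "CP/T": coord_phrases / t_units,       # coordinate phrases per T-unit
        "EFCR": error_free_clauses / clauses,  # error-free clause ratio (accuracy)
    }

# a hypothetical 150-word essay with 12 T-units, 20 clauses, 15 error-free
print(caf_metrics(150, 12, 20, 15, 6))
```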
Procedia PDF Downloads 392934 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam
Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard
Abstract:
Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones, and it is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province of Vietnam. In addition, the Cai River has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, changes in river discharge, river mouth widening, and a proposed saltwater intrusion prevention dam can all influence saltwater intrusion but had not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effects of the aforementioned factors. The analytical model was based on the model proposed by Savenije and was calibrated using limited in situ data. The numerical model was a 3D hydrodynamic model built with the Delft3D4 software. Both models agreed with the in situ data, mostly for tidally averaged values, and showed a roughly similar dependence on discharge, agreeing that this parameter had the most severe influence on the modeled saltwater intrusion. Especially for discharges below 10 m³/s, the saltwater was predicted to reach further than 10 km upstream. In both models, sea-level rise and river widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The sea-level rise predicted for 2070 was simulated to increase the saltwater intrusion length by 0.5 km. Furthermore, the effect of the saltwater intrusion dam appeared significant in the model used, but only for the highest position of the gate.Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers
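The qualitative dependence of intrusion length on discharge can be sketched with a toy exponential salinity profile; the decay length, its discharge scaling, and all numbers below are illustrative assumptions, not the calibrated Savenije model used in the study.

```python
import math

# Toy longitudinal salinity profile S(x) = S0 * exp(-x / L(Q)); the mouth
# salinity S0 and the scaling of the decay length L with discharge Q are
# assumed for illustration, not calibrated values for the Cai River.
S0 = 30.0          # salinity at the mouth, kg/m^3
THRESHOLD = 1.0    # salinity defining the intrusion limit, kg/m^3

def decay_length_km(q):
    """Assumed decay length: shrinks as discharge grows (q in m^3/s)."""
    return 3.0 / math.sqrt(q / 10.0)

def intrusion_length_km(q):
    """Distance upstream where S(x) drops below the threshold."""
    return decay_length_km(q) * math.log(S0 / THRESHOLD)

for q in (5.0, 10.0, 50.0):            # dry-season to wet-season discharges
    print(q, round(intrusion_length_km(q), 1))
```

Even this toy model reproduces the study's qualitative finding: low dry-season discharges push the salt front far beyond 10 km.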
Procedia PDF Downloads 1122933 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict a target variable at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
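The fixed-length look-back window described above can be sketched as a simple slicing routine (dummy data; the study's actual preprocessing is not specified here):

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Slice a 1-D series into (look-back window, future target) training pairs."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i : i + look_back])
        y.append(series[i + look_back : i + look_back + horizon])
    return np.array(X), np.array(y)

hourly = np.arange(48.0)                               # two days of hourly readings
X, y = make_windows(hourly, look_back=24, horizon=3)   # one-day window, 3 h ahead
print(X.shape, y.shape)                                # (22, 24) (22, 3)
```

Varying `look_back` and `horizon` here is exactly the experimental axis the paper sweeps for each model.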
Procedia PDF Downloads 1542932 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Authors: Yang Zheng, Wei Sun
Abstract:
This paper describes a new approach that can be used to convert experimental creep deformation data obtained from miniaturized thin-plate bending specimen tests to the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. Firstly, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates, for a Norton power law, under plane-stress (b → 0) and plane-strain (b → ∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since an analytical solution is not available for plates with arbitrary b-values, finite element (FE) analyses were used to obtain these solutions. Based on the FE results for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane-stress and plane-strain conditions, approximate numerical solutions for the deformation rate were obtained by curve fitting. Using these solutions, the reference stress method was utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed using a set of “theoretical” experimental data.Keywords: bending, creep, thin plate, materials engineering
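The conversion described above rests on Norton's power law and the reference stress method; in a generic form (with $\alpha$ and $\beta$ as geometry-dependent factors of the kind obtained from the FE curve fits — the symbols here are our notation, not necessarily the authors'):

```latex
\dot{\varepsilon}^{c} = B\,\sigma^{n},
\qquad
\sigma_{\mathrm{ref}} = \alpha\,\frac{P}{b\,d},
\qquad
\dot{\Delta} = \beta\, l\, \dot{\varepsilon}^{c}\!\left(\sigma_{\mathrm{ref}}\right)
```

so that a measured load-line deformation rate $\dot{\Delta}$ under load $P$ maps back to an equivalent uniaxial creep strain rate at the reference stress $\sigma_{\mathrm{ref}}$.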
Procedia PDF Downloads 4742931 Blood Flow in Stenosed Arteries: Analytical and Numerical Study
Authors: Shashi Sharma, Uaday Singh, V. K. Katiyar
Abstract:
Blood flow through a stenosed tube is of great interest to mechanical engineers as well as medical researchers. If stenosis exists in an artery, normal blood flow is disturbed. The deposition of fatty substances, cholesterol, and cellular waste products in the inner lining of an artery results in plaque formation. The present study deals with a mathematical model for blood flow in constricted arteries. Blood is considered a Newtonian, incompressible, unsteady, laminar fluid flowing in a rigid cylindrical tube along the axial direction. A time-varying pressure gradient is applied in the axial direction. An analytical solution is obtained using numerical inversion of the Laplace transform to calculate the velocity profile of the fluid as well as the particles.Keywords: blood flow, stenosis, Newtonian fluid, medical biology and genetics
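Under the stated assumptions (Newtonian, incompressible, laminar, axial flow $u(r,t)$ in a rigid tube), the governing momentum balance takes the standard textbook form:

```latex
\frac{\partial u}{\partial t}
= -\frac{1}{\rho}\frac{\partial p}{\partial z}
+ \nu \left( \frac{\partial^{2} u}{\partial r^{2}}
+ \frac{1}{r}\frac{\partial u}{\partial r} \right)
```

with a prescribed time-varying pressure gradient $-\partial p/\partial z$; Laplace-transforming in $t$ reduces this to an ordinary differential equation in $r$, whose numerical inversion yields the velocity profile.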
Procedia PDF Downloads 5162930 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation
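For contrast with the continuous algorithm, the conventional discretized ray casting it replaces can be sketched in 2D as below (all geometry and the 360-ray resolution are illustrative assumptions; the residual angular discretization error is exactly what the proposed method eliminates).

```python
import math

def ray_hits(o, d, seg, max_dist):
    """Does the ray from o in unit direction d hit segment seg within max_dist?"""
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    wx, wy = x1 - o[0], y1 - o[1]
    denom = d[0] * ey - d[1] * ex           # cross(d, segment direction)
    if abs(denom) < 1e-12:
        return False                        # ray parallel to the segment
    t = (wx * ey - wy * ex) / denom         # distance along the ray
    u = (wx * d[1] - wy * d[0]) / denom     # position along the segment
    return 1e-9 <= t <= max_dist and 0.0 <= u <= 1.0

def visible_fraction(origin, segments, n_rays=360, max_dist=100.0):
    """Fraction of cast rays that escape all obstacles (discretized openness)."""
    blocked = sum(
        any(ray_hits(origin, (math.cos(2 * math.pi * k / n_rays),
                              math.sin(2 * math.pi * k / n_rays)), s, max_dist)
            for s in segments)
        for k in range(n_rays))
    return 1.0 - blocked / n_rays

# a single wall east of the viewpoint
wall = [((1.0, -10.0), (1.0, 10.0))]
print(visible_fraction((0.0, 0.0), wall))  # ~0.53; shifts with n_rays (the error)
```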
Procedia PDF Downloads 462929 New Analytical Current-Voltage Model for GaN-based Resonant Tunneling Diodes
Authors: Zhuang Guo
Abstract:
In the field of GaN-based resonant tunneling diode (RTD) simulations, the traditional Tsu-Esaki formalism fails to predict the values of the peak currents and peak voltages in the simulated current-voltage (J-V) characteristics. The main reason is that, due to the strong internal polarization fields, a two-dimensional electron gas (2DEG) accumulates at the emitter, resulting in 2D-2D resonant tunneling currents, which become the dominant part of the total J-V characteristics. Being based on a 3D-2D resonant tunneling mechanism, the traditional Tsu-Esaki formalism cannot predict the J-V characteristics correctly. To overcome this shortcoming, we develop a new analytical model for the 2D-2D resonant tunneling currents generated in GaN-based RTDs. Compared with the Tsu-Esaki formalism, the new model makes the following modifications: firstly, considering the Heisenberg uncertainty, it corrects the expression for the density of states around the 2DEG eigenenergy levels at the emitter so that it can predict the half width at half maximum (HWHM) of the resonant tunneling currents; secondly, taking into account the effect of bias on the wave vectors at the collector, it modifies the expression for the transmission coefficients, which brings the predicted peak currents closer to the experimental data than the Tsu-Esaki formalism does. The new analytical model successfully predicts the J-V characteristics of GaN-based RTDs, and it also reveals in more detail the resonant tunneling mechanisms at work in GaN-based RTDs, which helps in designing and fabricating high-performance GaN RTDs.Keywords: GaN-based resonant tunneling diodes, tsu-esaki formalism, 2D-2D resonant tunneling, heisenberg uncertainty
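For reference, the traditional Tsu-Esaki (3D-2D) current density that the new model replaces is conventionally written as:

```latex
J = \frac{e\, m^{*} k_{B} T}{2\pi^{2}\hbar^{3}}
\int_{0}^{\infty} T(E_{z})\,
\ln\!\left[
\frac{1 + \exp\!\left((E_{F} - E_{z})/k_{B}T\right)}
     {1 + \exp\!\left((E_{F} - E_{z} - eV)/k_{B}T\right)}
\right] dE_{z}
```

where $T(E_{z})$ is the transmission coefficient. It is the 3D supply function (the logarithmic factor) that becomes inappropriate once the emitter states condense into discrete 2DEG subbands, motivating the 2D-2D treatment above.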
Procedia PDF Downloads 762928 Preliminary Study of the Phonological Development in Three and Four Year Old Bulgarian Children
Authors: Tsvetomira Braynova, Miglena Simonska
Abstract:
The article presents the results of research on phonological processes in three- and four-year-old children. For the purpose of the study, an author-developed test was constructed and administered to 120 children. The study included three areas of research: the word level (96 words), sentence repetition (10 sentences), and generation of the child's own speech from pictures (15 pictures). The test also provides additional information about the articulation errors of the assessed children. The main purpose of the testing is to analyze all phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors that children make are: sound substitution, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. All examined children were identified with an articulation disorder of the bilabial lambdacism type. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis shows that the more words a child can repeat in the “repeated speech” part, the more words they can be expected to generate in the “sentence generation” part. The results of this study show that the word-naming task provides sufficient and representative information to assess the child's phonology.Keywords: assessment, phonology, articulation, speech-language development
Procedia PDF Downloads 1862927 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm
Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri
Abstract:
This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction
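A minimal numerical sketch of the IRKA fixed-point iteration for a SISO state-space model (A, B, C) — applied, as the abstract prescribes, only to a strictly proper part — might look as follows. The initial shifts, the convergence test, and the complex-arithmetic shortcuts are simplified assumptions, not the paper's production implementation.

```python
import numpy as np

def irka_siso(A, B, C, r, tol=1e-8, max_iter=200):
    """Sketch of the Iterative Rational Krylov Algorithm for H2 model reduction.

    Builds rational Krylov bases at shifts sigma, projects (A, B, C), and
    updates the shifts to the mirror images of the reduced poles until they
    stagnate (the H2 first-order interpolation condition).
    """
    n = A.shape[0]
    sigma = np.linspace(1.0, 10.0, r).astype(complex)   # assumed initial shifts
    for _ in range(max_iter):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, B).ravel()
                             for s in sigma])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.conj().T,
                                             C.conj().T).ravel() for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        E_r = W.conj().T @ V                            # Petrov-Galerkin projection
        A_r = np.linalg.solve(E_r, W.conj().T @ A @ V)
        new_sigma = np.sort_complex(-np.linalg.eigvals(A_r))
        if np.linalg.norm(new_sigma - np.sort_complex(sigma)) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    B_r = np.linalg.solve(E_r, W.conj().T @ B)
    C_r = C @ V
    return A_r, B_r, C_r
```

Reducing, say, a stable order-6 diagonal system to order 2 and comparing the transfer functions C(sI - A)⁻¹B at a few frequencies gives a quick sanity check of the approximation.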
Procedia PDF Downloads 31