Search results for: radial error
373 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique based on the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction of the computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
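A minimal sketch of the final assignment stage is given below, assuming the firefly-found cluster means are available as `init_means`: a Gaussian mixture is fitted to the gray-level intensities by EM, and each pixel is labeled by its maximum posterior responsibility. scikit-learn's GaussianMixture stands in here for the paper's own EM implementation.

```python
# Sketch: EM fitting on gray levels, pixel assignment by maximum responsibility.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_grayscale(image: np.ndarray, init_means: np.ndarray) -> np.ndarray:
    """Assign each pixel to the component with the largest posterior probability."""
    intensities = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(
        n_components=len(init_means),
        means_init=init_means.reshape(-1, 1),  # firefly-found cluster means
    )
    gmm.fit(intensities)
    # predict() returns the argmax of the posterior responsibilities (Bayes rule)
    return gmm.predict(intensities).reshape(image.shape)
```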
Procedia PDF Downloads 217

372 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has allowed user download rates to increase for the current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and, on devices that support it, VoIP (Voice over IP). This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB with the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, where two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the parameters BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) together with Operator 2, is reassigning users to a lower frequency band, AWS (1700 MHz); the difference in signal quality with respect to the connections established with data by Operator 2, and the difference found in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
Procedia PDF Downloads 114

371 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbines’ design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained through correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine; a toy version of this step is sketched below. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
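The toy genetic algorithm below evolves radial chord and pitch distributions with truncation selection, one-point crossover, and Gaussian mutation. The surrogate `power_coefficient` is purely illustrative and would be replaced by a BEMT-based evaluator averaged over a range of wind speeds; the population size, bounds, and ideal profiles are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELEM = 10          # blade elements along the radius
POP, GENS = 40, 100  # GA population size and generations

def power_coefficient(chord, pitch):
    # Toy surrogate standing in for the BEMT solver: rewards a smooth tapered
    # chord and a linear twist. The real evaluator would integrate
    # blade-element forces over the rotor and over a range of wind speeds.
    r = np.linspace(0.2, 1.0, chord.size)
    ideal_chord = 0.3 * (1 - 0.8 * r)
    ideal_pitch = 20 * (1 - r)
    return -np.sum((chord - ideal_chord) ** 2) - 0.01 * np.sum((pitch - ideal_pitch) ** 2)

def evolve():
    # Each individual encodes the radial chord and pitch distributions.
    pop = rng.uniform([0.05] * N_ELEM + [-5] * N_ELEM,
                      [0.30] * N_ELEM + [25] * N_ELEM, (POP, 2 * N_ELEM))
    for _ in range(GENS):
        fit = np.array([power_coefficient(i[:N_ELEM], i[N_ELEM:]) for i in pop])
        parents = pop[np.argsort(fit)][-POP // 2:]            # truncation selection
        cuts = rng.integers(1, 2 * N_ELEM, POP // 2)
        children = np.array([np.r_[a[:c], b[c:]] for a, b, c in
                             zip(parents, parents[::-1], cuts)])  # one-point crossover
        children += rng.normal(0, 0.01, children.shape)       # Gaussian mutation
        pop = np.vstack([parents, children])
    fit = np.array([power_coefficient(i[:N_ELEM], i[N_ELEM:]) for i in pop])
    return pop[np.argmax(fit)]

best = evolve()
print(best[:N_ELEM])   # optimized chord distribution
```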
Procedia PDF Downloads 231

370 A Hybrid Block Multistep Method for Direct Numerical Integration of Fourth Order Initial Value Problems
Authors: Adamu S. Salawu, Ibrahim O. Isah
Abstract:
Direct solutions to several forms of fourth-order ordinary differential equations are not easily obtained without first reducing them to a system of first-order equations. Thus, numerical methods are being developed with the underlying techniques in the literature, which seek to approximate some classes of fourth-order initial value problems with admissible error bounds. Multistep methods present the great advantage of ease of implementation, but with the setback of several function evaluations for every stage of implementation. However, hybrid methods conventionally show a slightly higher order of truncation for any k-step linear multistep method, with the possibility of obtaining solutions at off-mesh points within the interval of solution. In light of the foregoing, we propose the continuous form of a hybrid multistep method with a Chebyshev polynomial as basis function for the numerical integration of fourth-order initial value problems of ordinary differential equations. The basis function is interpolated and collocated at some points on the interval [0, 2] to yield a system of equations, which is solved to obtain the unknowns of the approximating polynomial. The continuous form obtained, together with its first and second derivatives, is evaluated at carefully chosen points to obtain the proposed block method needed to directly approximate fourth-order initial value problems. The method is analyzed for convergence. Implementation of the method is done by conducting numerical experiments on some test problems. The outcome of the implementation suggests that the method performs well on problems with oscillatory or trigonometric terms, since the approximations at several points on the solution domain did not deviate too far from the theoretical solutions. The method also shows better performance compared with an existing hybrid method when implemented on a larger interval of solution.
Keywords: Chebyshev polynomial, collocation, hybrid multistep method, initial value problems, interpolation
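To make the interpolation-and-collocation step concrete, the sketch below solves a toy fourth-order IVP on [0, 2] with a Chebyshev basis: the initial data are interpolated at x = 0, the ODE is collocated at interior points, and the resulting linear system yields the coefficients of the approximating polynomial. The degree, the collocation points, and the test problem are illustrative choices, not the paper's.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

DEG = 8                                    # degree of the approximating polynomial
xs = np.linspace(0.0, 2.0, 200)

def basis_deriv_matrix(points, order):
    """Rows: d^order/dx^order of each Chebyshev basis T_j at the given points."""
    cols = []
    for j in range(DEG + 1):
        cj = np.zeros(DEG + 1); cj[j] = 1.0
        cols.append(C.chebval(points, C.chebder(cj, order)))
    return np.array(cols).T

# Toy fourth-order IVP on [0, 2]:  y'''' = y,  y(0)=y'(0)=y''(0)=y'''(0)=1
# (exact solution y = exp(x)). Interpolate the initial data, collocate the ODE.
colloc = np.linspace(0.1, 2.0, DEG - 3)             # interior collocation points
A = np.vstack([basis_deriv_matrix(np.array([0.0]), k) for k in range(4)] +
              [basis_deriv_matrix(colloc, 4) - basis_deriv_matrix(colloc, 0)])
b = np.r_[1.0, 1.0, 1.0, 1.0, np.zeros(colloc.size)]
coef = np.linalg.solve(A, b)                        # unknowns of the polynomial
print(np.max(np.abs(C.chebval(xs, coef) - np.exp(xs))))   # approximation error
```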
Procedia PDF Downloads 122

369 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli
Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha
Abstract:
Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This total database served to create the same number of ensembles as the number of segments in the tested beam. Statistics of these ensembles were then assigned to the given segments of the beams, and the Latin Hypercube Sampling (LHS) method was called upon to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, the detailed geometrical arrangement of individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit full-scale measurements of the timber beams, i.e. deflections, to improve their prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model when simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin’s beam theory was established. The improved posterior distributions show that the most significant change of the Young’s modulus distribution takes place in laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. The posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus
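A minimal sketch of the stochastic simulation loop appears below: Latin Hypercube samples of per-segment Young's moduli are propagated through a structural model to produce the deflection ensemble. The prior statistics and the one-line `midspan_deflection` surrogate are assumptions standing in for the indentation-based segment ensembles and the 2D FEM four-point-bending model.

```python
import numpy as np
from scipy.stats import qmc, norm

N_SEG, N_SIM = 12, 100           # beam segments and number of LHS simulations
mu, sigma = 11.0e9, 1.5e9        # illustrative prior stats of E per segment [Pa]

# Latin Hypercube Sampling of segment moduli; plain normal marginals assumed.
sampler = qmc.LatinHypercube(d=N_SEG, seed=1)
E_samples = norm.ppf(sampler.random(n=N_SIM), loc=mu, scale=sigma)

def midspan_deflection(E_segments):
    # Placeholder structural model: a serial-compliance surrogate for the
    # 2D FEM four-point-bending model used in the paper.
    return 1.0e7 / np.mean(E_segments)

deflections = np.array([midspan_deflection(e) for e in E_samples])
print(deflections.mean(), deflections.std())   # ensemble statistics of 100 runs
```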
Procedia PDF Downloads 283

368 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System
Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze
Abstract:
Antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety in such a way that vehicles remain steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. It is required to achieve very fast, accurate wheel slip tracking during hard braking, to eliminate chattering with improved transient and steady-state performance, and to shorten the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 ms⁻¹ on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surface conditions, respectively. Generally, the proposed system used an effective braking torque less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.
Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking
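For a flavor of the fuzzy stage alone (the VZLC is omitted), the sketch below maps the wheel-slip error to a braking-torque command with triangular membership functions and weighted-average defuzzification. The reference slip, the membership breakpoints, and the torque consequents are illustrative assumptions, not the paper's tuned rule base.

```python
import numpy as np

LAMBDA_REF = 0.2   # desired wheel slip (a typical optimum on dry asphalt)

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_brake_torque(slip_error, t_max=1200.0):
    """Map slip error e = LAMBDA_REF - slip to a braking torque [N*m].
    Three rules: e negative -> reduce torque, e zero -> hold, e positive -> increase."""
    mf = np.array([tri(slip_error, -0.4, -0.2, 0.0),   # Negative (wheel locking)
                   tri(slip_error, -0.1,  0.0, 0.1),   # Zero
                   tri(slip_error,  0.0,  0.2, 0.4)])  # Positive (under-braking)
    outputs = np.array([0.3, 0.6, 0.9]) * t_max        # rule consequents (singletons)
    return float(mf @ outputs / (mf.sum() + 1e-12))    # weighted-average defuzzification

# Wheel tending to lock (slip above reference) -> the torque command is cut back
print(fuzzy_brake_torque(-0.15))
```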
Procedia PDF Downloads 146

367 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR Data
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological habitats, landscape and agriculture management, carbon sequestration, and wood production. Accurate hedgerow detection using satellite imagery is a challenging problem in remote sensing because, spatially, a hedge is very similar to a linear object such as a road, while, from a spectral viewpoint, it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data now provide the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, are used to detect hedgerows in the test site of Fermoy, Ireland. Both dual polarizations of the Spotlight data (HH/VV) are used for hedgerow detection. Various SAR image techniques, integrating classification algorithms such as texture analysis, support vector machines, k-means, and random forest, are applied in a trial-and-error fashion to detect hedgerows and characterize them. Shannon entropy (ShE) and backscattering analysis of single and double bounce in polarimetric analysis are applied for object-oriented classification and, finally, extraction of the hedgerow network. This work is still in progress, and other methods remain to be applied to find the best approach for the study area. Here we present the preliminary finding that polarimetric TSX imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
Procedia PDF Downloads 230

366 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site survey required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
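The feature-extraction step can be illustrated in a few lines. The sketch below embeds hybrid WLAN/LTE fingerprints with scikit-learn's t-SNE; the fingerprint matrix is synthetic, and since t-SNE has no out-of-sample transform, in a pipeline like the paper's the embedding would feed offline radio map construction rather than live queries.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical fingerprint matrix: rows are reference points, columns are
# RSS values from hybrid WLAN access points and LTE cells.
rng = np.random.default_rng(0)
fingerprints = rng.normal(-80, 10, size=(500, 60))

# t-SNE preserves dominant neighborhood structure while suppressing noisy
# dimensions; the embedding serves as the denoised fingerprint feature.
features = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(features.shape)   # (500, 2) low-dimensional fingerprint features
```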
Procedia PDF Downloads 59

365 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats the errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation, thus verifying its validity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
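A stripped-down, single-axis version of the fusion idea is sketched below under several simplifying assumptions (linear model, one attitude angle, fixed noise levels): the gyro propagates an attitude-plus-bias state, and an attitude observation extracted from a depth-camera plane normal corrects it, directly observing the gyro-driven drift.

```python
import numpy as np

dt = 0.01                       # IMU sample period [s]
F = np.array([[1.0, -dt],       # attitude integrates (rate - bias)
              [0.0, 1.0]])
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])      # depth camera observes attitude directly
Q = np.diag([1e-6, 1e-8])       # process noise: gyro noise, bias random walk
R = np.array([[1e-4]])          # plane-normal attitude measurement noise

x = np.zeros(2)                 # state: [attitude (rad), gyro bias (rad/s)]
P = np.eye(2) * 1e-2

def predict(gyro_rate):
    """Dead-reckoning step: integrate the (biased) gyro rate."""
    global x, P
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q

def update(cam_attitude):
    """Fuse an attitude observation extracted from a wall/floor plane normal."""
    global x, P
    y = cam_attitude - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
```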
Procedia PDF Downloads 207

364 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (quantitative and qualitative analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
Procedia PDF Downloads 17

363 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), CO2 emissions and gross domestic product (GDP) for Egypt, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests some negative impacts of the CO2 emissions and of coal and natural gas use on the GDP. Conversely, a positive long-run causality from electricity consumption to the GDP is found to be significant in Egypt during the period. In the short run, some positive unidirectional causalities exist, running from coal consumption to the GDP, the CO2 emissions and natural gas use. Further, the GDP and electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support arguments that there are relationships among environmental quality, energy use, and economic output in both the short term and the long term; however, the effects may differ according to the source of energy, as in the case of Egypt for the period 1980-2010.
Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis
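The testing sequence maps directly onto standard tooling. The sketch below runs ADF unit-root tests, the Johansen cointegration test, and a VECM with statsmodels; the three random-walk series are placeholders for Egypt's GDP, CO2, and energy series, and the lag order and cointegration rank are illustrative choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical annual series 1980-2010; in the study these would be GDP,
# CO2 emissions, and disaggregated energy consumption for Egypt.
rng = np.random.default_rng(0)
idx = pd.period_range("1980", "2010", freq="Y")
data = pd.DataFrame(rng.normal(size=(31, 3)).cumsum(axis=0),
                    index=idx, columns=["gdp", "co2", "electricity"])

for col in data:                                     # ADF stationarity tests
    stat, pval, *_ = adfuller(data[col])
    print(f"ADF {col}: p={pval:.3f}")

jo = coint_johansen(data, det_order=0, k_ar_diff=1)  # Johansen cointegration
print("trace stats:", jo.lr1)
print("95% critical:", jo.cvt[:, 1])

vecm = VECM(data, k_ar_diff=1, coint_rank=1).fit()   # short- and long-run dynamics
print(vecm.alpha)                                    # error-correction (adjustment) terms
```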
Procedia PDF Downloads 615

362 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and significantly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
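Evaluation of R-peak detectors is usually done with tolerance-window matching rather than exact sample agreement. The sketch below computes sensitivity and positive predictive value with a 50 ms window at a 360 Hz sampling rate; both numbers are common benchmark conventions, not values taken from this paper.

```python
import numpy as np

def rpeak_metrics(true_peaks, pred_peaks, fs=360, tol_ms=50):
    """Sensitivity and positive predictive value for R-peak detection,
    counting a prediction as a true positive if it falls within a tolerance
    window of an as-yet-unmatched annotated peak."""
    tol = int(fs * tol_ms / 1000)
    matched, tp = set(), 0
    for p in pred_peaks:
        hits = [t for t in true_peaks if abs(t - p) <= tol and t not in matched]
        if hits:
            matched.add(min(hits, key=lambda t: abs(t - p)))
            tp += 1
    fn = len(true_peaks) - len(matched)   # annotated peaks never matched
    fp = len(pred_peaks) - tp             # predictions matching nothing
    sens = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, ppv

print(rpeak_metrics([100, 460, 820], [102, 458, 700, 823]))  # (1.0, 0.75)
```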
Procedia PDF Downloads 186

361 The Effect of Artificial Intelligence on Electric Machines and Welding
Authors: Mina Malak Zakaria Henin
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, comprising alternating and rotating fields. The rotating fields are generated in configurations ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns reveal different magnetic losses in the samples under test. Therefore, electric machines require special attention during the core loss calculation process to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in the electrical steel sample M36G29. The sample is exposed to alternating fields, circular fields, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data are applied to a 6/4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results reveal the large margin of error that can arise in the loss calculations if the flux pattern issue is overlooked. The error in different parts of the machine associated with neglecting the flux patterns may be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to decrease the magnetic losses in machine cores.
Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems), synchronous machine, vector control, multi-machine/multi-inverter, matrix inverter, railway traction, alternating core losses, finite element analysis, rotational core losses
Procedia PDF Downloads 28

360 Optimization of Biomass Components from Rice Husk Treated with Trichophyton soudanense and Trichophyton mentagrophyte and Effect of Yeast on the Bio-Ethanol Yield
Authors: Chukwuma S. Ezeonu, Ikechukwu N. E. Onwurah, Uchechukwu U. Nwodo, Chibuike S. Ubani, Chigozie M. Ejikeme
Abstract:
Trichophyton soudanense and Trichophyton mentagrophyte were isolated from the rice mill environment, cultured, and used singly and as a di-culture in the treatment of measured quantities of preheated rice husk. The optimized conditions studied showed that a carboxymethylcellulase (CMCellulase) activity of 57.61 µg/ml/min was optimal for the crude enzymes of Trichophyton mentagrophyte heat-pretreated rice husk at 50°C and 80°C, respectively. A duration of 120 hours (5 days) gave the highest CMCellulase activity of 75.84 µg/ml/min for the crude enzyme of Trichophyton mentagrophyte heat-pretreated rice husk, whereas a duration of 96 hours (4 days) gave a maximum activity of 58.21 µg/ml/min for the crude enzyme of Trichophyton soudanense heat-pretreated rice husk. The highest CMCellulase activities of 67.02 µg/ml/min and 69.02 µg/ml/min at pH 5 were recorded for the crude enzymes of monocultures of Trichophyton soudanense (TS) and Trichophyton mentagrophyte (TM) heat-pretreated rice husk, respectively. Analysis of the biomass components showed that rice husk cooled after heating and then treated with Trichophyton mentagrophyte gave the highest cellulose yield of 44.50 ± 10.90 (% ± standard error of the mean, SEM). A maximum total lignin value of 28.90 ± 1.80 (% ± SEM) was obtained from pre-heated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM). The hemicellulose content of 30.50 ± 2.12 (% ± SEM) was obtained from pre-heated rice husk treated with Trichophyton soudanense (TS), while the carbohydrate content of 16.79 ± 9.14 (% ± SEM) and the reducing and non-reducing sugar values of 2.66 ± 0.45 and 14.13 ± 8.69 (% ± SEM) were all obtained from pre-heated rice husk treated with Trichophyton mentagrophyte (TM). All the values listed above were the highest obtained for each rice husk treatment. The pre-heated rice husk treated with Trichophyton mentagrophyte (TM) and fermented with palm wine yeast gave the highest bio-ethanol value of 11.11 ± 0.21 (% ± standard deviation).
Keywords: Trichophyton soudanense, Trichophyton mentagrophyte, biomass, bioethanol, rice husk
Procedia PDF Downloads 679

359 Prospective Cohort Study on Sequential Use of Catheter with Misoprostol vs Misoprostol Alone for Second Trimester Medical Abortion
Authors: Hanna Teklu Gebregziabher
Abstract:
Background: A variety of techniques for medical termination of second-trimester pregnancy can be used, but there is no consensus on which is best. Most evidence suggests that the combined use of an intracervical Foley catheter and vaginal misoprostol is a safe, effective, and acceptable method for termination of second-trimester pregnancy, comparable to the mifepristone-misoprostol combination regimen, with lower cost and no additional maternal risks; even so, the use of mifepristone and misoprostol alone, with no other procedure, is still the most common approach in different institutions for second-trimester pregnancy. Methods: A cross-sectional comparative prospective design was employed on women who were admitted for second-trimester medical abortion and in whom medical abortion failed or there was no change in cervical status 24 hours after the first dose of misoprostol. The study was conducted at St. Paul's Hospital Millennium Medical College. A sample of 44 participants in each arm was necessary to give a two-tailed test, a type I error of 5%, 80% statistical power, and a 1:1 ratio among groups. Thus, a total of 94 cases, 47 from each arm, were recruited. Data were entered and cleaned using Epi Info, analyzed using SPSS version 29.0 statistical software, and presented in descriptive and tabular forms. Different variables were cross-tabulated and compared for significant differences using the chi-square test and independent t-test. Result: There was a significant difference between the two groups in induction-to-expulsion time and the number of doses used. The mean ± SD of the induction-to-expulsion time was 48.09 ± 11.86 for those who used misoprostol alone and 36.7 ± 6.772 for those who used a trans-cervical catheter sequentially with misoprostol. Conclusion: The use of a trans-cervical Foley catheter in conjunction with misoprostol in a sequential manner is a more effective, safe, and easily accessible procedure. In addition, the cost of utilizing the catheter is less than the cost of misoprostol, and the catheter is readily available. As a good substitute, we advise using the trans-cervical catheter even for medical abortions performed in the second trimester.
Keywords: second trimester, medical abortion, catheter, misoprostol
Procedia PDF Downloads 45

358 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. It was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units at Makerere University. The study adopted a predictive cross-sectional survey design. Data were collected through a questionnaire survey of 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and linear multiple regression analysis to analyze the data. The findings reveal a coefficient of determination (R-square) of 0.403 at a significance level of 0.000, suggesting that the four factors jointly explained 40.3% of the variance in UIC success, at a standard error of estimate of 0.60188. The strength of association between institutional factors, relational factors, output factors, and framework factors, taking into consideration all interactions among the study variables, was 64% (R = 0.635). Institutional, relational, output and framework factors accounted for 34% of the variance in the level of UIC success (adjusted R² = 0.338). The remaining variance of 66% is explained by factors other than institutional, relational, output, and framework factors. The standardized coefficient statistics revealed that relational factors (β = 0.454, t = 5.247, p = 0.000) and framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAUs at Makerere University. Output factors (β = 0.082, t = 1.096, p = 0.275) and institutional factors (β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant determinants. The study concludes that relational and framework factors positively and significantly determine the success of UIC, but output and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate relational and framework factors to enhance UIC at Makerere University, and further research on the effects of institutional and output factors on the success of UIC in universities.
Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
Procedia PDF Downloads 61

357 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The traffic mass is determined jointly by the characteristics of the driver and the vehicle; the driving speed of the vehicle and external variables have no bearing on it. From a physical standpoint, this study examines the vehicle's car-following bottleneck, identifies the external factors that have an impact on how it drives, takes into account that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. The energy-time conversion coefficient is created from an economic standpoint, using the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delays under the Wiedemann car-following model. The difference between the measured deceleration delay obtained by simulation and the theoretical value calculated by the model is compared using the conversion model between traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical deceleration delay obtained by the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account while studying vehicle-following behavior. The socioeconomic relationship between a vehicle's deceleration delay and its traffic mass can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.
Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
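The energy-time conversion can be illustrated with a back-of-the-envelope calculation. Every number below (effective traffic mass, speeds, wage rate, fuel cost) is an assumed placeholder; the paper derives the coefficient from the social average wage and the average motor-fuel cost, which are not reproduced here.

```python
# Illustrative numbers only; none are taken from the study.
M_EFFECTIVE = 1500.0       # traffic mass of the target vehicle [kg], assumed
V0, V1 = 16.7, 8.3         # speeds before/after the bottleneck [m/s] (60 -> 30 km/h)
WAGE_PER_S = 0.01          # social average wage [currency/s], assumed
FUEL_COST_PER_J = 2e-6     # average fuel cost per joule, assumed

# Kinetic energy shed while decelerating behind the leading vehicle.
dE = 0.5 * M_EFFECTIVE * (V0**2 - V1**2)

# Energy-time conversion coefficient: joules valued in equivalent lost time.
k_energy_time = FUEL_COST_PER_J / WAGE_PER_S       # [s/J]
deceleration_delay = k_energy_time * dE            # [s]
print(f"energy shed: {dE:.0f} J -> delay equivalent: {deceleration_delay:.2f} s")
```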
Procedia PDF Downloads 67

356 A Survey on Students' Intentions to Drop Out and Dropout Causes in Higher Education of Mongolia
Authors: D. Naranchimeg, G. Ulziisaikhan
Abstract:
The student dropout problem has not been investigated recently within Mongolian higher education. Dropping out is a personal decision, but it may cause unemployment and other social problems, including a low quality of life, because students who have not completed a degree cannot find better-paid jobs. The research aims to determine the percentage of at-risk students, to understand the reasons for dropouts, and to find a way to predict them. The study is based on students of the Mongolian National University of Education, including its Arkhangai branch school, the National University of Mongolia, the Mongolian University of Life Sciences, the Mongolian University of Science and Technology, the Mongolian National University of Medical Science, Ikh Zasag International University, and Dornod University. We conducted a paper survey by the method of random sampling and surveyed about 100 students per university. The margin of error was 4%, the confidence level 90%, and the sample size 846, but we excluded 56 students from this study; the cause for exclusion was missing data on the questionnaire. The survey has 17 questions in total, 4 of which are demographic. The survey shows that 1.4% of the students always thought about dropping out, whereas 61.8% of them thought about it sometimes. The results also suggest that students’ dropout intentions are not related to their sex, marital and social status, or peer and faculty climate, whereas they depend slightly on their chosen specialization. Finally, the paper presents the reasons for dropping out provided by the students. The two main reasons are personal reasons related to choosing the wrong study program or not liking the chosen course (50.38%), and financial difficulties (42.66%). These findings reveal the importance of early prevention of dropout where possible, combined with increased attention to high school students in choosing the study program that is right for them, and targeted financial support for those who are at risk.
Keywords: at-risk students, dropout, faculty climate, Mongolian universities, peer climate
Procedia PDF Downloads 397

355 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities, and similar ones for the same activity, is very difficult, especially as the number of activities grows. The accuracy of a classifier depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while a smaller number can compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the deep autoencoder neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and normalization issues with the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined in the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
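The sketch below gives a flavor of the idea: a small autoencoder is trained to minimize the reconstruction MSE between encoder input and decoder output, while a crude random search over hyperparameters stands in for the meta-heuristic optimizer. The layer sizes, the PyTorch framework, and random search itself are all assumptions; the paper's HO-DAE uses four hidden layers and its own meta-heuristic.

```python
import torch
import torch.nn as nn

def make_autoencoder(n_in, code, width):
    # A shallow stand-in for the four-hidden-layer HO-DAE layout.
    return nn.Sequential(
        nn.Linear(n_in, width), nn.ReLU(),
        nn.Linear(width, code), nn.ReLU(),   # bottleneck: the reduced feature set
        nn.Linear(code, width), nn.ReLU(),
        nn.Linear(width, n_in),
    )

def reconstruction_mse(model, x, lr, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)          # MSE between encoder input and decoder output
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
x = torch.randn(256, 64)                     # placeholder EEG feature vectors

# Random search stands in for the meta-heuristic: sample candidate
# hyperparameters and keep whichever minimizes the reconstruction MSE.
best = None
for _ in range(10):
    lr = 10 ** torch.empty(1).uniform_(-4, -2).item()
    width = int(torch.randint(16, 48, (1,)))
    mse = reconstruction_mse(make_autoencoder(64, 8, width), x, lr)
    if best is None or mse < best[0]:
        best = (mse, lr, width)
print(best)
```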
Procedia PDF Downloads 114

354 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys
Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz
Abstract:
There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NPs) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of the nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently reduce the time for designing and developing new materials with novel/enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrodinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution, magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP). The effects of particle size (in the 2-10 nm particle size range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the molecular dynamics (MD) simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculation of radial distribution functions (RDF), interatomic distances, coordination numbers, and core-to-surface concentration profiles, as well as Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations of the structural evolution and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys
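Since the structural analysis centers on radial distribution functions, a minimal g(r) sketch is given below: pairwise distances under the minimum-image convention are histogrammed and normalized by the ideal-gas shell expectation. It is a generic illustration on random coordinates, not the LAMMPS post-processing actually used in the study.

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """g(r) for a periodic cubic box: pair-distance histogram normalized by
    the ideal-gas expectation, as used to characterize structural evolution."""
    n = len(positions)
    rho = n / box**3
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)            # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))[0]
    r_mid = (np.arange(n_bins) + 0.5) * dr
    shell = 4 * np.pi * r_mid**2 * dr * rho     # ideal-gas shell counts
    return r_mid, hist / (shell * n / 2)        # pairs counted once each

rng = np.random.default_rng(0)
r, g = radial_distribution(rng.uniform(0, 20.0, (500, 3)), box=20.0, r_max=8.0)
print(g[:5])   # flat near 1 for an ideal (uncorrelated) gas
```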
Procedia PDF Downloads 243

353 EFL Teachers’ Sequential Self-Led Reflection and Possible Modifications in Their Classroom Management Practices
Authors: Sima Modirkhameneh, Mohammad Mohammadpanah
Abstract:
In the process of EFL teachers’ development, self-led reflection (SLR) is thought to play an imperative role because it may help teachers analyze, evaluate, and contemplate what is happening in their classes. Such contemplation can not only enhance the quality of their instruction and provide better learning environments for learners but also improve the quality of their classroom management (CM). Accordingly, understanding the effect of teachers’ SLR practices may help us gain valuable insights into the possible modifications SLR may bring about in all aspects of EFL teachers' practice, especially their CM. The main purpose of this case study was, thus, to investigate the impact of the SLR practices of 12 Iranian EFL teachers on their CM, based on the Universal Classroom Management Checklist (UCMC). In addition, another objective of the current study was to form a clear image of EFL teachers’ perceptions of their own SLR practices and their possible outcomes. By conducting repeated reflective interviews, observations, and feedback sessions with the participants over five teaching sessions, the researcher analyzed the outcomes qualitatively through the process of meaning categorization and data interpretation based on the principles of Grounded Theory. The results demonstrated that EFL teachers utilized SLR practices to improve different aspects of their language teaching skills and CM in different contexts. Almost all participants had positive comments and reactions regarding the effect of SLR on their CM procedures in different respects (expectations and routines, behavior-specific praise, error corrections, prompts and precorrections, opportunity to respond, strengths and weaknesses of CM, teachers’ perception, CM ability, and learning process). Otherwise stated, the results implied that familiarity with the UCMC criteria and reflective practices contributes to modifying teacher participants’ perceptions of their CM procedures and to incorporating reflective practices into their teaching styles. The results are thought to be valuable for teachers, teacher educators, and policymakers, who are recommended to pay special attention to the contributions as well as the complexity of reflective teaching. The study concludes with more detailed results and implications and useful directions for future research.
Keywords: classroom management, EFL teachers, reflective practices, self-led reflection
Procedia PDF Downloads 54

352 Assessment of Ocular Morbidity, Knowledge and Barriers to Accessing Eye Care Services among Children Living on an Offshore Island, Bangladesh
Authors: Abir Dey, Shams Noman
Abstract:
Introduction: An offshore island is a remote area, isolated from the mainland, whose inhabitants are often deprived of basic services. Children on an offshore island are usually underserved in health care because the health care systems in such remote areas are quite poor compared to the mainland. Therefore, proper information is required for appropriate planning to reduce the underlying causes of visual deprivation among the children of the offshore island. Purpose: The purpose of this study was to determine the ocular morbidities, knowledge, and barriers to eye care services among children on an offshore island. Methods: The study team visited different rural communities in Sandwip Upazila, Chittagong district, and collected data by screening children aged 5-16 years with spot examinations. The whole study was conducted with both qualitative and quantitative methods. To determine the ocular status of the children, examinations were done under skilled ophthalmologists and optometrists. A focus group discussion was held. The sample size was 490. It was a community-based descriptive study, and the sampling method was purposive sampling. Results: Of the 490 children, about 56.90% were female and 43.10% were male. Among them, 456 were school-going children (93.1%) and 34 were non-school-going (6.9%). In this study, the most common ocular morbidity was allergic conjunctivitis (35.2%). Other notable ocular morbidities were refractive error (27.7%), blepharitis (13.8%), meibomian gland dysfunction (7.5%), strabismus (6.3%) and amblyopia (6.3%). Most of the non-school-going children were involved in different types of domestic work, such as farming and fishing. About 90.04% of the children who had ocular abnormalities could not visit a doctor, for various reasons. Conclusions: The rate of ocular morbidity was high on the offshore island, and the eye health care facilities there were not well established. Awareness should be raised about the necessity of maintaining hygiene and eye health care among the island people. Timely intervention through available eye care facilities and management can reduce the ocular morbidity rate in the area.
Keywords: morbidities, screening, barriers, offshore island, knowledge
Procedia PDF Downloads 159

351 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study
Authors: Maria I. Nuñez-Hernández, Maria L. Riesco
Abstract:
Background: Exclusive breastfeeding (EBF) up to 6 months of age has been considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups with the greatest risk of EBF abandonment, in order to deliver the best strategies. Children of adolescent mothers are among these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; no twin babies; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of age. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water or any other product that is not breast milk, except drugs. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the Log-Rank test, admitting a probability of type I error of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were the maternal perception of the quality of milk as poor (p < 0.001), the maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), the mother's return to school (p = 0.040) and the presence of nipple trauma (p = 0.045). Conclusion: The EBF abandonment rate was highest in the first 4 months of life and is higher than that of the general population of women who breastfeed. Among the EBF abandonment factors, one is related to the adolescent condition itself, and two are related to the mother's subjective perception.
Keywords: adolescent, breastfeeding, midwifery, nursing
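The survival analysis in this design maps onto standard tooling. The sketch below fits a Kaplan-Meier curve for time to EBF abandonment and compares an exposed and an unexposed group with a log-rank test using the lifelines library; the ten follow-up records are fabricated for illustration only.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up records: months until EBF abandonment (or censoring
# at 6 months), with an exposure flag such as pacifier use.
df = pd.DataFrame({
    "months":    [2, 4, 6, 6, 3, 4, 6, 2, 5, 6],
    "abandoned": [1, 1, 0, 0, 1, 1, 0, 1, 1, 0],
    "pacifier":  [1, 1, 0, 0, 1, 0, 0, 1, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["abandoned"])
print(kmf.survival_function_)            # proportion still exclusively breastfeeding

g = df["pacifier"] == 1                  # compare exposed vs. unexposed groups
res = logrank_test(df.loc[g, "months"], df.loc[~g, "months"],
                   df.loc[g, "abandoned"], df.loc[~g, "abandoned"])
print(res.p_value)                       # Log-Rank test at alpha = 0.05
```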
Procedia PDF Downloads 321

350 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to increasing water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine the CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed-Forward Back-Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of the rice crop.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
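The empirical CWSI referred to above is conventionally computed as CWSI = (dT − dT_LL) / (dT_UL − dT_LL), where dT is the canopy-air temperature difference and the lower/upper baselines come from the regression relations mentioned in the abstract. The sketch below encodes this; the baseline coefficients are placeholders, since the fitted regression constants are not given here.

```python
def cwsi(tc, ta, h, avpd, u,
         lower=lambda h, avpd, u: -2.0 - 0.5 * avpd + 0.1 * h + 0.05 * u,
         upper=lambda h, u: 3.0 + 0.2 * h + 0.1 * u):
    """Empirical CWSI from the canopy-air temperature difference and baselines.
    Coefficients in the default baselines are placeholders; the study fits the
    lower baseline to (h, AVPD, u) and the upper baseline to (h, u) by regression."""
    dT = tc - ta
    dT_ll = lower(h, avpd, u)     # non-stressed (lower) baseline
    dT_ul = upper(h, u)           # fully stressed (upper) baseline
    return (dT - dT_ll) / (dT_ul - dT_ll)

# 0 = no water stress, 1 = fully stressed canopy
print(cwsi(tc=31.0, ta=29.0, h=0.8, avpd=1.5, u=1.2))
```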
Procedia PDF Downloads 117

349 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria
Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi
Abstract:
There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms have resulted in an increase in crude oil production in the recent past. There is controversy over the importance of oil production in the development of the manufacturing sector in Nigeria: some scholars claim that oil has been a blessing to the development of the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin on manufactured exports, manufacturing employment, agricultural employment, and service employment, in line with the theory of Dutch Disease, using the unit root test to establish the level of stationarity and the Engle-Granger cointegration test to check the long-run relationship. The Autoregressive Distributed Lag (ARDL) bounds test was also used, and a Vector Error Correction Model was estimated to determine the speed of adjustment of manufacturing exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed that there was resource movement, as labour moved away from the manufacturing sector to both the oil sector and the services sector. The study concluded that Dutch Disease was present in the manufacturing industry and that the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts should be made to diversify the Nigerian economy. Furthermore, a conducive business environment should be provided to encourage more involvement of the private sector in the agriculture and manufacturing sectors of the economy.
Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria
Procedia PDF Downloads 79
348 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology
Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani
Abstract:
Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces with accuracy, reliability, and completeness. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite the significant number of techniques developed in the literature to provide automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC Transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography
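The following sketch illustrates the kind of combined distance and weighted objective described above; the blend weight w and the curvature-based weighting lam are illustrative assumptions, not the paper's actual formulas.

# A sketch of a curvature-blended correspondence distance and a weighted
# P-P / P-Pl objective. The weight w and the weighting lam are hypothetical;
# the paper derives its automatic weights from precomputed curvature features.
import numpy as np

def combined_distance(p, q, kp, kq, w=0.7):
    """p, q: 3D points; kp, kq: discrete curvatures at p and q."""
    d_euclid = np.linalg.norm(p - q)  # Euclidean term
    d_curv = abs(kp - kq)             # curvature-similarity term
    return w * d_euclid + (1.0 - w) * d_curv

def icp_objective(src, dst, normals, k_src, k_dst):
    """Weighted combination of point-to-point and point-to-plane residuals
    over matched point arrays (N x 3) with per-point curvatures (N,)."""
    lam = 1.0 / (1.0 + np.abs(k_src - k_dst))             # hypothetical weighting
    pp = np.sum((src - dst) ** 2, axis=1)                 # point-to-point residuals
    ppl = np.einsum("ij,ij->i", src - dst, normals) ** 2  # point-to-plane residuals
    return np.sum(lam * pp + (1.0 - lam) * ppl)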
Procedia PDF Downloads 424
347 Spatial Variation of WRF Model Rainfall Prediction over Uganda
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo
Abstract:
Rainfall is a major climatic parameter affecting many sectors, such as health, agriculture, and water resources. Its quantitative prediction remains a challenge to weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes, namely the Kain-Fritsch, Betts-Miller-Janjic, Grell-Devenyi, Grell-3D, Grell-Freitas, and New Tiedtke schemes of the Weather Research and Forecasting (WRF) model, regarding quantitative rainfall prediction over Uganda is investigated using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving a comparatively lower rainfall amount (200–500 mm); western Uganda (270–550 mm); eastern Uganda (400–900 mm); and the Lake Victoria basin (400–650 mm). A spatial variation in the rainfall amount simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300–750 mm) while presenting comparable rainfall amounts over eastern Uganda (400–900 mm). The Betts-Miller-Janjic, Grell-Devenyi, and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300–600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250–450 mm) but underestimated rainfall over the Lake Victoria basin (150–300 mm), while the New Tiedtke scheme generally underestimated the rainfall amount over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region.
Keywords: convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model
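A minimal sketch of the comparison metric follows; the station totals and scheme outputs are illustrative placeholders, not the study's data.

# RMSE of simulated versus observed seasonal rainfall totals per scheme.
# All numbers below are hypothetical.
import numpy as np

def rmse(sim, obs):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

observed = np.array([420.0, 610.0, 350.0, 540.0])  # hypothetical MAM totals (mm)
schemes = {
    "Kain-Fritsch": np.array([510.0, 700.0, 400.0, 600.0]),
    "Betts-Miller-Janjic": np.array([330.0, 480.0, 280.0, 450.0]),
    "Grell-Freitas": np.array([400.0, 560.0, 300.0, 500.0]),
}
for name, sim in schemes.items():
    print(f"{name}: RMSE = {rmse(sim, observed):.1f} mm")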
Procedia PDF Downloads 310
346 Influence of Stress Relaxation and Hysteresis Effect for Pressure Garment Design
Authors: Chia-Wen Yeh, Ting-Sheng Lin, Chih-Han Chang
Abstract:
Pressure garments have been used to prevent and treat hypertrophic scars following serious burns since the 1970s. The use of pressure garments is believed to hasten the maturation process and decrease the height of scars. A pressure garment is custom made by reducing the circumferential measurement of the patient by 10%~20%, a value called the Reduction Factor. However, the exact reduction value used depends on the subjective judgment of the therapist and the feeling of the patient throughout a trial-and-error process. Laplace's Law can be applied to calculate the pressure exerted by the garment from the circumferential measurements of the patient and the tension profile of the fabric. The tension profile currently obtained, however, neglects the stress relaxation and hysteresis effects present in most elastic fabrics. The purpose of this study was to investigate the tension attenuation arising from the stress relaxation and hysteresis effects of the fabrics. Samples of pressure garments were obtained from the Sunshine Foundation Organization, a nonprofit organization for burn patients in Taiwan. The wall tension profiles of the pressure garments were measured on a material testing system. Specimens were extended by 10% of the original length and held for 1 hour for the influence of the stress relaxation effect to take place. Then, specimens were extended by 15% of the original length for 10 seconds and reduced to a 10% extension to simulate the donning movement, allowing the hysteresis effect to take place. The load history was recorded. The stress relaxation effect is obvious from the load curves: the wall tension decreased by 8.5%~10% after 60 min of holding. The hysteresis effect is also obvious from the load curves: the wall tension increased slightly, then decreased by 1.5%~2.5%, falling below the stress relaxation results after 60 min of holding. Wall tension attenuation of the fabric therefore exists due to both stress relaxation and hysteresis, with the influence of hysteresis being greater than that of stress relaxation. These effects should be considered in order to design and evaluate the pressure of pressure garments more accurately.
Keywords: hypertrophic scars, hysteresis, pressure garment, stress relaxation
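A minimal sketch of the Laplace-law pressure estimate follows, assuming a circular limb cross-section; the tension value is an illustrative input rather than a measured profile.

# Interface pressure = fabric wall tension / limb radius, with the radius
# taken from the circumferential measurement. Numbers are illustrative.
import math

def interface_pressure(circumference_m, tension_n_per_m):
    """Interface pressure (Pa) from limb circumference (m) and fabric wall
    tension per unit garment length (N/m)."""
    radius = circumference_m / (2.0 * math.pi)
    return tension_n_per_m / radius

tension = 200.0                         # N/m, hypothetical fabric wall tension
tension_relaxed = tension * (1 - 0.10)  # ~10% loss after 1 h of holding,
                                        # per the stress relaxation result above
p = interface_pressure(0.30, tension_relaxed)
print(f"{p:.0f} Pa ({p / 133.322:.1f} mmHg)")  # 1 mmHg = 133.322 Pa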
Procedia PDF Downloads 512
345 Investigation of Scaling Laws for Stiffness and Strength in Bioinspired Glass Sponge Structures Produced by Fused Filament Fabrication
Authors: Hassan Beigi Rizi, Harold Auradou, Lamine Hattali
Abstract:
Various industries, including the civil engineering, automotive, aerospace, and biomedical fields, are currently seeking novel high-performance lightweight materials to reduce energy consumption. Inspired by the structure of Euplectella aspergillum glass sponges (EA-sponge), 2D unit cells were created and fabricated using a Fused Filament Fabrication (FFF) process with polylactic acid (PLA) filaments. The stiffness and strength of the bio-inspired EA-sponge lattices were investigated both experimentally and numerically under uniaxial tensile loading and compared with three standard square lattices reinforced with diagonal struts (Designs B and C) or non-diagonal struts (Design D). The aim is to establish predictive scaling-law models and examine the deformation mechanisms involved. The results indicated that, for the EA-sponge structure, the relative modulus and yield strength scaled linearly with relative density, suggesting that the deformation mechanism is stretching-dominated. Finite element analysis (FEA), with periodic boundary conditions for volumetric homogenization, confirmed these trends and extended them beyond the experimental limits imposed by the FFF printing process. The stretching-dominated behavior, investigated from 0.1 to 0.5 relative density, demonstrates that the EA-sponge structure can be exploited for the realization of square lattice topologies that are stiff and strong and thus attractive for lightweight structural applications. However, the FFF process introduces an accuracy limitation, with approximately 10% error, making it challenging to print structures with a relative density below 0.2. Future work could focus on exploring the impact of different printing materials on the performance of EA-sponge structures.
Keywords: bio-inspiration, lattice structures, fused filament fabrication, scaling laws
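The sketch below shows how a scaling exponent can be extracted from such data by fitting E_rel = C * rho_rel**n in log-log space; the data points are illustrative, not the study's measurements.

# Fit the scaling law E_rel = C * rho_rel**n by linear regression in log-log
# space. An exponent n near 1 indicates stretching-dominated behavior,
# n near 2 bending-dominated. Data points are hypothetical.
import numpy as np

rho_rel = np.array([0.1, 0.2, 0.3, 0.4, 0.5])          # relative density
e_rel = np.array([0.021, 0.043, 0.061, 0.085, 0.104])  # hypothetical relative modulus

n, log_c = np.polyfit(np.log(rho_rel), np.log(e_rel), 1)
print(f"scaling exponent n = {n:.2f}, prefactor C = {np.exp(log_c):.2f}")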
Procedia PDF Downloads 5
344 Effect of Deficit Irrigation on Barley Yield and Water Productivity through Field Experiment and Modeling at Koga Irrigation Scheme, Amhara Region, Ethiopia
Authors: Bekalu Melis Alehegn, Dagnenet Sultan Alemu
Abstract:
The insufficiency of water is the most severe constraint on the expansion of agriculture in arid and semi-arid areas. Deficit irrigation at different growth stages is an important strategy for advancing the yield and water productivity of barley in water-scarce areas. A field experiment was conducted at the Koga irrigation scheme in Ethiopia to examine barley yield response to different irrigation regimes and to validate the AquaCrop model. The experimental setup comprised six randomized treatments (T) with three replications for one irrigation season, owing to financial limitations. The irrigation regimes were set, by trial and error, at 100%, 75%, and 50% of the gross irrigation requirement at different growth stages in order to select the optimal water application level. The treatments were: no stress at all (T1), 25% stressed during all crop stages (T2), 50% stressed at all stages (T3), 50% stressed at the development stage (T4), 50% stressed at the mid-stage (T5), and 50% stressed at the initial and late stages (T6). The agronomic parameters, including canopy cover, biomass, and grain yield, were collected to compare the ground-based crop yield with the AquaCrop model. The results showed that the initial and late stages, as well as a 25% stress throughout the whole season, were the most suitable for practicing deficit irrigation without significant yield reduction. The highest (2.62 kg/m³) and the lowest (2.03 kg/m³) water productivity were found under T3 and T4, respectively. A 50% stress at the mid-growth stage and a 50% stress of the full irrigation water requirement at all growth stages significantly (α = 5%) affected canopy expansion, biomass, and yield production. The AquaCrop model performed well in simulating the yield of barley for most of the treatments (R² = 0.84 and RMSE = 0.7 t ha⁻¹).
Keywords: AquaCrop, barley, deficit irrigation, irrigation regimes, water productivity
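A minimal sketch of the two quantities reported above follows; all numbers are illustrative placeholders, not the study's measurements.

# Water productivity (kg of grain per cubic metre of water applied) and the
# RMSE used to judge the AquaCrop simulations.
import numpy as np

def water_productivity(yield_kg_per_ha, water_mm):
    """Water productivity in kg/m^3; 1 mm of water over 1 ha equals 10 m^3."""
    return yield_kg_per_ha / (water_mm * 10.0)

def rmse(sim, obs):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

print(water_productivity(4200.0, 160.0))  # e.g. 4.2 t/ha on 160 mm -> 2.62 kg/m^3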
Procedia PDF Downloads 26