Search results for: computational error
1247 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
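A minimal sketch of the fingerprint feature-extraction step described above, using scikit-learn's t-SNE on a synthetic hybrid WLAN/LTE RSSI matrix; the data shapes, column split, and parameter values are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Synthetic fingerprint database: 200 reference points,
# 30 WLAN RSSI columns + 10 LTE RSRP columns (values in dBm).
wlan = rng.uniform(-95, -40, size=(200, 30))
lte = rng.uniform(-120, -70, size=(200, 10))
fingerprints = np.hstack([wlan, lte])

# Project the noisy hybrid fingerprints onto a low-dimensional embedding
# that keeps the dominant neighborhood structure for the localization model.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(fingerprints)

print(embedding.shape)  # (200, 2) features fed to the positioning model
```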
Procedia PDF Downloads 63
1246 Quantum Entangled States and Image Processing
Authors: Sanjay Singh, Sushil Kumar, Rashmi Jain
Abstract:
Quantum computing is a new trend in computational theory, and a quantum mechanical system has several useful properties such as entanglement. We aim to store data concerning the structure and content of a simple image in a quantum system. Consider an array of n qubits, which we propose to use as our memory storage. In recent years, classical image processing has been shifting toward quantum image processing. Quantum image processing is an elegant approach to overcome the problems of its classical counterparts. Image storage, retrieval, and processing on quantum machines is an emerging area. Although quantum machines do not yet exist in physical reality, theoretical algorithms developed on the basis of quantum entangled states give new insights into processing classical images in the quantum domain. Here, in the present work, we give a brief overview of how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of tripartite Greenberger-Horne-Zeilinger and W states and their usefulness for storing shapes which may consist of three vertices. We also propose techniques to store shapes having more than three vertices.
Keywords: Greenberger-Horne-Zeilinger, image storage and retrieval, quantum entanglement, W states
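For reference, a minimal NumPy sketch constructing the tripartite GHZ and W state vectors discussed above; the amplitude ordering (basis state |abc> indexed by its binary value) is an assumed convention, and the snippet only checks normalization and orthogonality rather than implementing the storage scheme itself.

```python
import numpy as np

# Three-qubit basis states are indexed by the binary value of |abc>.
ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)         # (|000> + |111>)/sqrt(2)

w = np.zeros(8, dtype=complex)
w[0b001] = w[0b010] = w[0b100] = 1 / np.sqrt(3)  # (|001>+|010>+|100>)/sqrt(3)

print(np.vdot(ghz, ghz).real, np.vdot(w, w).real)  # both 1.0 (normalized)
print(abs(np.vdot(ghz, w)))                        # 0.0 (orthogonal states)
```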
Procedia PDF Downloads 310
1245 Optimizing Machine Learning Through Python Based Image Processing Techniques
Authors: Srinidhi. A, Naveed Ahmed, Twinkle Hareendran, Vriksha Prakash
Abstract:
This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper looks into these in great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust performance of models. Further, we discuss the development of deep learning models specific to facial emotion detection, age classification, and gender classification, which essentially includes the preprocessing techniques interrelated with model performance. Conclusions from this study pinpoint the best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and retaining important image features critical for effective training of deep learning models.
Keywords: image processing, machine learning applications, template matching, emotion detection
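A minimal sketch of the template-matching preprocessing step mentioned above, using OpenCV's normalized cross-correlation; the file names and acceptance threshold are placeholders, not values from the paper.

```python
import cv2

# Placeholder file names -- substitute your own dataset images.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: a response close to 1.0 means a strong match.
response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.8:  # assumed acceptance threshold
    h, w = template.shape
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print(f"Object found at {top_left}-{bottom_right}, score {max_val:.2f}")
else:
    print("No confident match")
```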
Procedia PDF Downloads 24
1244 A Numerical Method to Evaluate the Elastoplastic Material Properties of Fiber Reinforced Composite
Authors: M. Palizvan, M. H. Sadr, M. T. Abadi
Abstract:
The representative volume element (RVE) plays a central role in the mechanics of random heterogeneous materials with a view to predicting their effective properties. In this paper, a computational homogenization methodology, developed to determine the effective linear elastic properties of composite materials, is extended to predict the effective nonlinear elastoplastic response of long fiber reinforced composites. Finite element simulations of volumes of different sizes and fiber volume fractions are performed to calculate the overall response of the RVE. The dependencies of the overall stress-strain curves on the number of fibers inside the RVE are studied for the 2D cases. Volume-averaged stress-strain responses are generated from RVEs and compared with the finite element calculations available in the literature at moderate and high fiber volume fractions. For these materials, the existence of an RVE is demonstrated for RVE sizes corresponding to 10–100 times the diameter of the fibers. In addition, the response of small-size RVEs is found to be anisotropic, whereas the average of all large ones recovers the isotropic material properties.
Keywords: homogenization, periodic boundary condition, elastoplastic properties, RVE
Procedia PDF Downloads 157
1243 Conceptional Design of a Hyperloop Capsule with Linear Induction Propulsion System
Authors: Ahmed E. Hodaib, Samar F. Abdel Fattah
Abstract:
High-speed transportation is a growing concern. To develop high-speed rails and to increase high-speed efficiencies, the idea of the Hyperloop was introduced. The challenge is to overcome the difficulties of managing friction and air resistance, which become substantial when vehicles approach high speeds. In this paper, we present the methodologies of the capsule design, which received a design concept innovation award at the SpaceX competition in January 2016. MATLAB scripts are written for the levitation and propulsion calculations and iterations. Computational Fluid Dynamics (CFD) is used to simulate the air flow around the capsule considering the effect of the axial-flow air compressor and the levitation cushion on the air flow. The design procedures of a single-sided linear induction motor are analyzed in detail and its geometric and magnetic parameters are determined. A structural design is introduced and the Finite Element Method (FEM) is used to analyze the stresses in different parts. The configuration and the arrangement of the components are illustrated. Moreover, comments on manufacturing are made.
Keywords: high-speed transportation, hyperloop, railways transportation, single-sided linear induction motor (SLIM)
Procedia PDF Downloads 281
1242 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information is known about the environment, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of performance benefit via simulation, and a hardware implementation, thus verifying its feasibility. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and Kinect™ camera from Microsoft.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
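A minimal sketch of the kind of aiding described above: a two-state (attitude angle and gyro bias) Kalman filter that propagates a gyro measurement and corrects it with an attitude observation such as a depth-camera plane fit might supply. The noise values, rates, and scalar-attitude simplification are illustrative assumptions, not the paper's actual filter.

```python
import numpy as np

dt = 0.01                       # 100 Hz IMU propagation (assumed)
F = np.array([[1.0, -dt],       # state: [attitude angle (rad), gyro bias (rad/s)]
              [0.0, 1.0]])
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])      # depth camera observes the angle directly
Q = np.diag([1e-6, 1e-8])       # process noise (assumed)
R = np.array([[1e-4]])          # depth-camera attitude noise (assumed)

x = np.zeros(2)                 # initial state estimate
P = np.eye(2) * 1e-3

def predict(x, P, gyro_rate):
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, angle_meas):
    y = angle_meas - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One propagation step with a gyro reading, then a camera correction.
x, P = predict(x, P, gyro_rate=0.02)
x, P = update(x, P, angle_meas=0.0185)
print(x)   # estimated [angle, gyro bias]
```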
Procedia PDF Downloads 210
1241 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (quantitative and qualitative analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
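A minimal sketch of one duplicate-record detection step of the kind the deduplication figures above imply, using fuzzy string similarity from the Python standard library; the records, fields, and 0.85 threshold are illustrative assumptions rather than the study's actual pipeline.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy master-data records (assumed fields).
records = [
    {"id": 1, "name": "Acme Industries Ltd", "city": "Berlin"},
    {"id": 2, "name": "ACME Industries Limited", "city": "Berlin"},
    {"id": 3, "name": "Globex Corporation", "city": "Munich"},
]

def similarity(a, b):
    """Average fuzzy similarity over the compared fields."""
    fields = ("name", "city")
    return sum(
        SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio()
        for f in fields
    ) / len(fields)

THRESHOLD = 0.85  # assumed cut-off for flagging duplicates
for r1, r2 in combinations(records, 2):
    score = similarity(r1, r2)
    if score >= THRESHOLD:
        print(f"Possible duplicate: {r1['id']} ~ {r2['id']} (score {score:.2f})")
```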
Procedia PDF Downloads 23
1240 Effects of X and + Tail-Body Configurations on Hydrodynamic Performance and Stability of an Underwater Vehicle
Authors: Kadri Koçer, Sezer Kefeli
Abstract:
This paper proposes a comparison of the hydrodynamic performance and stability characteristics of an underwater vehicle which has two types of tail design, namely X and + tail-body configurations. The effects of these configurations on the underwater vehicle’s hydrodynamic performance and maneuvering characteristics will be investigated comprehensively. Hydrodynamic damping coefficients for modeling the motion of the underwater vehicle will be predicted. Additionally, forces and moments due to control surfaces will be compared using computational fluid dynamics methods. In aviation, the X tail-body configuration is widely used for high maneuverability requirements. However, underwater, the + tail-body configuration is more commonly used than the X tail-body configuration for its stability characteristics. Thus, it is important to see the effects and differences of the tail designs in the underwater world. For the CFD analysis, the incompressible, three-dimensional, and steady Navier-Stokes equations will be used to simulate the flows. Also, the k-ε Realizable turbulence model with enhanced wall treatment will be used. Numerical results are verified against experimental results. The overall goal of this study is to present the advantages and disadvantages of the hydrodynamic performance and stability characteristics of the X and + tail-body configurations of the underwater vehicle.
Keywords: maneuverability, stability, CFD, tail configuration, hydrodynamic design
Procedia PDF Downloads 193
1239 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves
Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare
Abstract:
The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and the sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or root mean square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have experienced that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To solve this issue, the authors have selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors have shown that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values that can be predicted by the classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate minutely the influence of the studied domain and the settings of the CFD simulation. It was firstly shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient regarding their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. This also emphasized the influence of the choices made for the implementation of the CFD calculation and results analysis. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve
Procedia PDF Downloads 50
1238 Compressible Lattice Boltzmann Method for Turbulent Jet Flow Simulations
Authors: K. Noah, F.-S. Lien
Abstract:
In Computational Fluid Dynamics (CFD), there is a variety of numerical methods, some of which rely on macroscopic model representations. These macroscopic models can be solved by finite-volume, finite-element, or finite-difference methods. In contrast, the lattice Boltzmann method (LBM) is considered a mesoscopic particle method, with its scale lying between the macroscopic and microscopic scales. The LBM works well for solving incompressible flow problems, but certain limitations arise when solving compressible flows, particularly at high Mach numbers. An improved lattice Boltzmann model for compressible flow problems is presented in this research study. A higher-order Taylor series expansion of the Maxwell equilibrium distribution function is used to overcome the limitations of the LBM when solving high-Mach-number flows. Large eddy simulation (LES) is implemented in the LBM to simulate turbulent jet flows. The results have been validated with available experimental data for turbulent compressible free jet flow at subsonic speeds.
Keywords: compressible lattice Boltzmann method, multiple relaxation times, large eddy simulation, turbulent jet flows
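For context, a minimal NumPy sketch of the standard second-order D2Q9 equilibrium distribution that the lattice Boltzmann method starts from; the compressible model described above extends this with higher-order Taylor terms, which are not reproduced here, and the density and velocity values are placeholders.

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Second-order equilibrium f_i^eq = w_i*rho*(1 + 3*eu + 4.5*eu^2 - 1.5*u^2)."""
    eu = e @ u                       # projection of u on each lattice direction
    usq = u @ u
    return w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

rho, u = 1.0, np.array([0.05, 0.02])   # placeholder macroscopic fields
feq = equilibrium(rho, u)
print(feq.sum(), feq @ e)              # recovers rho and rho*u (mass and momentum)
```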
Procedia PDF Downloads 276
1237 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), CO2 emissions, and gross domestic product (GDP) for Egypt using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for co-integration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests some negative impacts of the CO2 emissions and the coal and natural gas use on the GDP. Conversely, a positive long-run causality from the electricity consumption to the GDP is found to be significant in Egypt during the period. In the short run, some positive unidirectional causalities exist, running from the coal consumption to the GDP, the CO2 emissions, and the natural gas use. Further, the GDP and the electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support arguments that there are relationships among environmental quality, energy use, and economic output in both the short term and the long term; however, the effects may differ due to the sources of energy, such as in the case of Egypt for the period 1980-2010.
Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis
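A minimal sketch of the econometric pipeline named above (ADF stationarity test, Johansen cointegration, VECM) using statsmodels on synthetic series; the generated data, lag orders, and deterministic-term choices are assumptions for illustration, not the Egyptian dataset or the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 31  # annual observations, 1980-2010

# Synthetic stand-ins for log GDP, energy use, and CO2 emissions.
trend = np.cumsum(rng.normal(0.02, 0.05, n))
data = pd.DataFrame({
    "gdp": trend + rng.normal(0, 0.02, n),
    "energy": 0.8 * trend + rng.normal(0, 0.02, n),
    "co2": 0.6 * trend + rng.normal(0, 0.02, n),
})

# 1) ADF unit-root test on each series in levels.
for col in data:
    stat, pval, *_ = adfuller(data[col])
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# 2) Johansen cointegration test (constant term, 1 lagged difference).
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)
print("95% critical values:", joh.cvt[:, 1])

# 3) VECM with one cointegrating relation for short/long-run dynamics.
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.alpha)  # adjustment (error-correction) coefficients
```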
Procedia PDF Downloads 618
1236 Integrated Vegetable Production Planning Considering Crop Rotation Rules Using a Mathematical Mixed Integer Programming Model
Authors: Mohammadali Abedini Sanigy, Jiangang Fei
Abstract:
In this paper, a mathematical optimization model was developed to maximize profit in a vegetable production planning problem. It serves as a decision support system that assists farmers in land allocation to crops and harvest scheduling decisions. The developed model can handle different rotation rules in two consecutive cycles of production, which is a common practice in organic production systems. Moreover, different production methods for the same crop were considered in the model formulation. The main strength of the model is that it is not restricted to predetermined production periods, which makes the planning more flexible. The model is classified as a mixed integer programming (MIP) model, formulated in PYOMO (a Python package for formulating optimization models), and solved via the Gurobi and CPLEX optimizer packages. The model was tested with secondary data from 'Australian vegetable growing farms', and the results were obtained and discussed with the computational test runs. The results show that the model can successfully provide reliable solutions for real-size problems.
Keywords: crop rotation, harvesting, mathematical model formulation, vegetable production
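A minimal Pyomo sketch in the spirit of the model described above: binary land-allocation variables, a profit-maximizing objective, a one-crop-per-plot-per-cycle constraint, and a simple no-repeat rotation rule across two cycles. The crop data, profit figures, and solver choice ('glpk' instead of Gurobi/CPLEX) are illustrative assumptions, not the authors' formulation.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, maximize, SolverFactory)

crops = ["lettuce", "carrot", "onion"]
plots = [1, 2]
cycles = [1, 2]
profit = {"lettuce": 120, "carrot": 90, "onion": 100}  # assumed profit per plot

m = ConcreteModel()
m.x = Var(crops, plots, cycles, domain=Binary)  # 1 if crop planted on plot in cycle

m.obj = Objective(
    expr=sum(profit[c] * m.x[c, p, t] for c in crops for p in plots for t in cycles),
    sense=maximize)

# Each plot grows at most one crop per cycle.
def one_crop_rule(m, p, t):
    return sum(m.x[c, p, t] for c in crops) <= 1
m.one_crop = Constraint(plots, cycles, rule=one_crop_rule)

# Simple rotation rule: the same crop may not repeat on a plot in consecutive cycles.
def rotation_rule(m, c, p):
    return m.x[c, p, 1] + m.x[c, p, 2] <= 1
m.rotation = Constraint(crops, plots, rule=rotation_rule)

SolverFactory("glpk").solve(m)
for c in crops:
    for p in plots:
        for t in cycles:
            if m.x[c, p, t].value and m.x[c, p, t].value > 0.5:
                print(f"cycle {t}, plot {p}: plant {c}")
```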
Procedia PDF Downloads 193
1235 High-Fidelity 1D Dynamic Model of a Hydraulic Servo Valve Using 3D Computational Fluid Dynamics and Electromagnetic Finite Element Analysis
Authors: D. Henninger, A. Zopey, T. Ihde, C. Mehring
Abstract:
The dynamic performance of a 4-way solenoid-operated hydraulic spool valve has been analyzed by means of a one-dimensional modeling approach capturing flow, magnetic and fluid forces, valve inertia forces, fluid compressibility, and damping. Increased model accuracy was achieved by analyzing the detailed three-dimensional electromagnetic behavior of the solenoids and the flow behavior through the spool valve body for a set of relevant operating conditions, thereby allowing the accurate mapping of flow and magnetic forces on the moving valve body, in lieu of representing the respective forces by lower-order models or by means of simplistic textbook correlations. The resulting high-fidelity one-dimensional model provided the basis for specific and timely design modifications eliminating experimentally observed valve oscillations.
Keywords: dynamic performance model, high-fidelity model, 1D-3D decoupled analysis, solenoid-operated hydraulic servo valve, CFD and electromagnetic FEA
Procedia PDF Downloads 180
1234 CFD Simulation for Flow Behavior in Boiling Water Reactor Vessel and Upper Pool under Decommissioning Condition
Authors: Y. T. Ku, S. W. Chen, J. R. Wang, C. Shih, Y. F. Chang
Abstract:
In order to respond to the policy decision of non-nuclear homes, Taiwan Power Company (TPC) will carry out the decommissioning project of the Kuosheng Nuclear Power Plant (KSNPP) to meet the regulatory requirements in the near future. In this study, a computational fluid dynamics (CFD) methodology has been employed to develop a flow prediction model for a boiling water reactor (BWR) with an upper pool under the decommissioning stage. The model can be utilized to investigate the flow behavior as the vessel is combined with the upper pool and the continuity cooling system. At normal operating conditions, different parameters are obtained for the full fluid area, including velocity, mass flow, and mixing phenomena in the reactor pressure vessel (RPV) and upper pool. Through the efforts of this study, an integrated simulation model will be developed for flow field analysis of the decommissioning KSNPP under normal operating conditions. It can be expected that a baseline result for future analysis applications of TPC can be provided by this study.
Keywords: CFD, BWR, decommissioning, upper pool
Procedia PDF Downloads 268
1233 Face Recognition Using Eigen Faces Algorithm
Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale
Abstract:
Face recognition is a technique which can be applied to a wide variety of problems like image and film processing, human-computer interaction, criminal identification, etc. This has motivated researchers to develop computational models to identify faces, which are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the base for the development of human identity recognition. Test images and training images are taken directly with the camera on the Android device. The test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. Further, this algorithm can be extended to recognize the facial expressions of a person. Recognition could be carried out under widely varying conditions like frontal view and scaled frontal view of subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback by showing a window with the subject's info from the database and sending an e-mail notification to interested institutions using the Android application.
Keywords: face detection, face recognition, eigen faces, algorithm
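A minimal scikit-learn sketch of the eigenface idea described above: PCA learns the eigenfaces, and a nearest-neighbor classifier matches a test face in the reduced space. It uses the Olivetti faces dataset (downloaded on first use) and an assumed 50 components; it is not the Android implementation from the paper.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                      # 400 images, 40 subjects, 64x64
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# PCA on the training faces: the principal components are the eigenfaces.
pca = PCA(n_components=50, whiten=True).fit(X_train)
eigenfaces = pca.components_.reshape((-1, 64, 64))

# Project faces into eigenface space and match with a nearest-neighbor classifier.
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
accuracy = clf.score(pca.transform(X_test), y_test)
print(f"Identification accuracy: {accuracy:.2%}, eigenfaces shape: {eigenfaces.shape}")
```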
Procedia PDF Downloads 364
1232 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
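Not the IncResU-Net model itself, but a minimal classical baseline for the same R-peak detection task, useful as a point of comparison: band-pass filtering followed by peak picking with SciPy. The synthetic ECG, filter band, and peak-picking thresholds are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 360                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Crude synthetic ECG: narrow Gaussian "R-peaks" at 75 bpm plus noise.
rr = 0.8                                  # seconds between beats
beats = np.arange(0.5, t[-1], rr)
ecg = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beats)
ecg += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Band-pass filter around the QRS energy band (assumed 5-15 Hz).
b, a = butter(3, [5, 15], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ecg)

# Peak picking with a refractory distance of 0.3 s and an amplitude threshold.
peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                      height=0.4 * filtered.max())
print(f"Detected {len(peaks)} R-peaks; expected {len(beats)}")
```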
Procedia PDF Downloads 192
1231 The Effect of Artificial Intelligence on Electric Machines and Welding
Authors: Mina Malak Zakaria Henin
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns consisting of alternating and rotating fields. The rotating fields are generated in different configurations, ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns reveal different magnetic losses in the samples under test. Therefore, electric machines require special attention during the core loss calculation process to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in the electrical steel sample M36G29. The sample was exposed to alternating fields, circular fields, and elliptical fields with axis ratios of 0.2, 0.4, 0.6, and 0.8. The measured data were applied to 6/4 switched reluctance motors at three frequencies of interest to the industry: 60 Hz, 400 Hz, and 1 kHz. The results reveal a high margin of error that can arise during the loss calculations if the flux pattern issue is overlooked. The error in different components of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a major effect on the flux pattern, in order to decrease the magnetic losses in machine cores.
Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems), synchronous machine, vector control, multi-machine/multi-inverter, matrix inverter, railway traction, alternating core losses, finite element analysis, rotational core losses
Procedia PDF Downloads 34
1230 Optimization of Biomass Components from Rice Husk Treated with Trichophyton Soudanense and Trichophyton Mentagrophyte and Effect of Yeast on the Bio-Ethanol Yield
Authors: Chukwuma S. Ezeonu, Ikechukwu N. E. Onwurah, Uchechukwu U. Nwodo, Chibuike S. Ubani, Chigozie M. Ejikeme
Abstract:
Trichophyton soudanense and Trichophyton mentagrophyte were isolated from the rice mill environment, cultured, and used singly and as a di-culture in the treatment of measured quantities of preheated rice husk. Optimized conditions studied showed that carboxymethylcellulase (CMCellulase) activities of 57.61 µg/ml/min were optimum for Trichophyton mentagrophyte heat-pretreated rice husk crude enzymes at 50 °C and 80 °C, respectively. A duration of 120 hours (5 days) gave the highest CMCellulase activity of 75.84 µg/ml/min for the crude enzyme of Trichophyton mentagrophyte heat-pretreated rice husk. However, a 96-hour (4-day) duration gave a maximum activity of 58.21 µg/ml/min for the crude enzyme of Trichophyton soudanense heat-pretreated rice husk. The highest CMCellulase activities of 67.02 µg/ml/min and 69.02 µg/ml/min at pH 5 were recorded for crude enzymes of monocultures of Trichophyton soudanense (TS) and Trichophyton mentagrophyte (TM) heat-pretreated rice husk, respectively. Biomass components showed that rice husk cooled after heating and then treated with Trichophyton mentagrophyte gave 44.50 ± 10.90 (% ± standard error of mean) cellulose as the highest yield. A maximum total lignin value of 28.90 ± 1.80 (% ± SEM) was obtained from pre-heated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM). The hemicellulose content of 30.50 ± 2.12 (% ± SEM) was from pre-heated rice husk treated with Trichophyton soudanense (TS); the lignin value of 28.90 ± 1.80 from pre-heated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM); and the carbohydrate content of 16.79 ± 9.14 (% ± SEM) and reducing and non-reducing sugar values of 2.66 ± 0.45 and 14.13 ± 8.69 (% ± SEM) were obtained from pre-heated rice husk treated with Trichophyton mentagrophyte (TM). All the values listed above were the highest values obtained from each rice husk treatment. The pre-heated rice husk treated with Trichophyton mentagrophyte (TM) and fermented with palm wine yeast gave a bio-ethanol value of 11.11 ± 0.21 (% ± standard deviation) as the highest yield.
Keywords: Trichophyton soudanense, Trichophyton mentagrophyte, biomass, bioethanol, rice husk
Procedia PDF Downloads 685
1229 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence on the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "a needle in a haystack" approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data up to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-conditions diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposal and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 75
1228 CFD Analysis of a Two-Sided Windcatcher Inlet/Outlet Ducts’ Height in Ventilation Flow through a Three Dimensional Room
Authors: Amirreza Niktash, B. P. Huynh
Abstract:
A windcatcher is a structure fitted on the roof of a building to provide natural ventilation by using wind power; it exhausts the stale inside air to the outside and supplies fresh outside air into the interior space of the building, working by the pressure difference between the outside and inside of the building and using the ventilation principles of passive stacks and wind towers, respectively. In this paper, the effects of different heights of the inlet/outlet ducts of a two-sided windcatcher on the flow rate, flow velocity, and flow pattern through a three-dimensional room fitted with the windcatcher are investigated and analysed by using a RANS CFD technique and applying the standard k-ε turbulence model via a commercial computational fluid dynamics (CFD) software package. The results show that the inlet/outlet duct height strongly affects the flow rate, flow velocity, and flow pattern, especially in the living area of the room, when the wind velocity is not too low. The results are confirmed by experimental tests on a scaled model constructed in the laboratory, and this work improves the two-sided windcatcher’s performance in ventilation applications.
Keywords: CFD, RANS, ventilation, windcatcher
Procedia PDF Downloads 431
1227 Prospective Cohort Study on Sequential Use of Catheter with Misoprostol vs Misoprostol Alone for Second Trimester Medical Abortion
Authors: Hanna Teklu Gebregziabher
Abstract:
Background: A variety of techniques for the medical termination of second-trimester pregnancy can be used, but there is no consensus about which is best. Even though most evidence suggests that the combined use of an intracervical Foley catheter and vaginal misoprostol is a safe, effective, and acceptable method for termination of second-trimester pregnancy, comparable to the mifepristone-misoprostol combination regimen with lower cost and no additional maternal risks, the use of mifepristone and misoprostol alone with no other procedure is still the most common approach in different institutions for second-trimester pregnancy. Methods: A cross-sectional comparative prospective study design was employed on women who were admitted for second-trimester medical abortion and for whom medical abortion failed or there was no change in cervical status after 24 hours of the first dose of misoprostol. The study was conducted at St. Paul's Hospital Millennium Medical College. A sample of 44 participants in each arm was necessary to give a two-tailed test, a type 1 error of 5%, 80% statistical power, and a 1:1 ratio among groups. Thus, a total of 94 cases, 47 from each arm, were recruited. Data were entered and cleaned using Epi Info, analyzed using SPSS version 29.0 statistical software, and presented in descriptive and tabular forms. Different variables were cross-tabulated and compared for significant differences using the chi-square test and the independent t-test. Result: There was a significant difference between the two groups in induction-to-expulsion time and the number of doses used. The mean ± SD of induction-to-expulsion time for those who used misoprostol alone was 48.09 ± 11.86, and for those who used a trans-cervical catheter sequentially with misoprostol it was 36.7 ± 6.772. Conclusion: The use of a trans-cervical Foley catheter in conjunction with misoprostol in a sequential manner is a more effective, safe, and easily accessible procedure. In addition, the cost of utilizing the catheter is less than the cost of misoprostol, and the catheter is readily available. As a good substitute, we advise using the trans-cervical catheter even for medical abortions performed in the second trimester.
Keywords: second trimester, medical abortion, catheter, misoprostol
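A minimal check of the reported group comparison using SciPy's t-test from summary statistics; the equal group sizes of 47 are taken from the abstract, and the unequal-variance (Welch) option is an assumption about how the comparison might be run.

```python
from scipy.stats import ttest_ind_from_stats

# Induction-to-expulsion time (mean ± SD) reported for the two arms, n = 47 each.
stat, pvalue = ttest_ind_from_stats(
    mean1=48.09, std1=11.86, nobs1=47,   # misoprostol alone
    mean2=36.70, std2=6.772, nobs2=47,   # trans-cervical catheter + misoprostol
    equal_var=False)                     # Welch's t-test (assumed)

print(f"t = {stat:.2f}, p = {pvalue:.2e}")
```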
Procedia PDF Downloads 52
1226 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units at Makerere University. The study adopted a predictive cross-sectional survey design. Data were collected using a questionnaire survey from 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and linear multiple regression analysis to analyze the data. The study findings reveal a coefficient of determination (R-squared) of 0.403 at a significance level of 0.000, suggesting that 40.3% of UIC success was explained, with a standard error of estimate of 0.60188. The strength of association between institutional factors, relational factors, output factors, and framework factors, taking into consideration all interactions among the study variables, was 64% (R = 0.635). Institutional, relational, output, and framework factors accounted for 34% of the variance in the level of UIC success (adjusted R² = 0.338). The remaining variance of 66% is explained by factors other than institutional, relational, output, and framework factors. The standardized coefficient statistics revealed that relational factors (β = 0.454, t = 5.247, p = 0.000) and framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAUs at Makerere University. Output factors (β = 0.082, t = 1.096, p = 0.275) and institutional factors (β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant determinants of the success of UIC in the science academic units at Makerere University. The study concludes that relational factors and framework factors positively and significantly determine the success of UIC, but output factors and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate relational and framework factors to enhance UIC at Makerere University and further research on the effects of institutional and output factors on the success of UIC in universities.
Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
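A minimal statsmodels sketch of the multiple regression reported above, with standardized predictors so that the coefficients are comparable to the betas in the abstract; the synthetic data and effect sizes are placeholders, not the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 172  # number of surveyed academic staff

# Placeholder standardized predictor scores and a synthetic outcome.
X = pd.DataFrame(rng.normal(size=(n, 4)),
                 columns=["institutional", "relational", "output", "framework"])
y = 0.45 * X["relational"] + 0.31 * X["framework"] + rng.normal(0, 0.8, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # R-squared, adjusted R-squared, t and p values
print(model.params)      # coefficients comparable to standardized betas
```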
Procedia PDF Downloads 66
1225 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on it. From a physical perspective, this study examines the vehicle's bottleneck when following a car, identifies the outside factors that have an impact on how it drives, takes into account that the vehicle will transform kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. The energy-time conversion coefficient is derived from an economic standpoint utilizing the social average wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delays under the Wiedemann car-following model. The difference between the measured value of deceleration delay acquired by simulation and the theoretical value calculated by the model is compared using the conversion calculation model of traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical value of the deceleration delay obtained by the model and the measured value from the simulation results is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account while studying vehicle-following behavior. The deceleration delay value of a vehicle's driving and the traffic mass have a socioeconomic relationship that can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.
Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
Procedia PDF Downloads 70
1224 A Survey on Students' Intentions to Dropout and Dropout Causes in Higher Education of Mongolia
Authors: D. Naranchimeg, G. Ulziisaikhan
Abstract:
The student dropout problem has not recently been investigated within Mongolian higher education. A student dropping out is a personal decision, but it may cause unemployment and other social problems, including low quality of life, because students who have not completed a degree cannot find better-paid jobs. The research aims to determine the percentage of at-risk students, understand the reasons for dropouts, and find a way to predict them. The study is based on students of the Mongolian National University of Education, including its Arkhangai branch school, the National University of Mongolia, the Mongolian University of Life Sciences, the Mongolian University of Science and Technology, the Mongolian National University of Medical Science, Ikh Zasag International University, and Dornod University. We conducted a paper survey by the method of random sampling and surveyed about 100 students per university. The margin of error was 4%, the confidence level 90%, and the sample size 846, but we excluded 56 students from this study. The cause for exclusion was missing data on the questionnaire. The survey has a total of 17 questions, 4 of which were demographic questions. The survey shows that 1.4% of the students always thought about dropping out, whereas 61.8% of them thought about it sometimes. Also, the results of the research suggest that students’ dropout intentions do not have relationships with their sex, marital and social status, or peer and faculty climate, whereas they slightly depend on their chosen specialization. Finally, the paper presents the reasons for dropping out provided by the students. The main two reasons for dropouts are personal reasons related to choosing the wrong study program or not liking the course they had chosen (50.38%), and financial difficulties (42.66%). These findings reveal the importance of early prevention of dropout where possible, combined with increased attention to high school students in choosing the study program that is right for them, and targeted financial support for those who are at risk.
Keywords: at-risk students, dropout, faculty climate, Mongolian universities, peer climate
Procedia PDF Downloads 403
1223 Numerical Analysis on the Effect of Abrasive Parameters on Wall Shear Stress and Jet Exit Kinetic Energy
Authors: D. Deepak, N. Yagnesh Sharma
Abstract:
Abrasive water jet (AWJ) machining is a relatively new non-traditional machining process used in the machining of fiber reinforced composites. The quality of the machined surface depends on the jet exit kinetic energy, which depends on various operating and material parameters. In the present work, the effects of abrasive parameters such as size, concentration, and type on the jet kinetic energy are investigated using computational fluid dynamics (CFD). In addition, the effect of these parameters on the wall shear stress developed inside the nozzle is also investigated. It is found that, for the same operating parameters, an increase in the abrasive volume fraction (concentration) results in a significant decrease in the wall shear stress as well as the jet exit kinetic energy. An increase in the abrasive particle size results in a marginal decrease in the jet exit kinetic energy. The numerical simulations also indicate that garnet abrasives produce better jet exit kinetic energy than aluminium oxide and silicon carbide.
Keywords: abrasive water jet machining, jet kinetic energy, operating pressure, wall shear stress, garnet abrasive
Procedia PDF Downloads 381
1222 Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal Prostheses
Authors: William Huang
Abstract:
Retinal prostheses are currently limited to low-resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information for the prostheses, which are constrained by the number of electrodes. One key idea is to localize the main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed through MiniBatchKMeans clustering and color space selection. The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. In comparison to current schemes, the proposed framework demonstrated better visual quality in the tested images. The use of the region-contrast saliency map showed improvements in efficacy of up to 30%. Finally, the computational speed of this algorithm is less than 380 ms on the tested cases, making real-time retinal prostheses feasible.
Keywords: retinal implants, virtual processing unit, computer vision, saliency maps, color quantization
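A minimal sketch of the color-quantization and downsampling stages described above, using scikit-learn's MiniBatchKMeans and OpenCV bicubic resizing; the 16-color palette, 32x32 electrode grid, and input file name are illustrative assumptions, not the study's parameters.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

image = cv2.imread("input.png")                # placeholder file name (BGR image)
pixels = image.reshape(-1, 3).astype(np.float32)

# Reduce color depth to a small palette with MiniBatchKMeans.
n_colors = 16                                  # assumed palette size
kmeans = MiniBatchKMeans(n_clusters=n_colors, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.astype(np.uint8)
quantized = palette[kmeans.predict(pixels)].reshape(image.shape)

# Downsample with bicubic interpolation to the electrode-array resolution.
electrode_grid = (32, 32)                      # assumed prosthesis resolution
encoded = cv2.resize(quantized, electrode_grid, interpolation=cv2.INTER_CUBIC)

print(encoded.shape)                           # (32, 32, 3) frame sent to the implant
```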
Procedia PDF Downloads 157
1221 Complex Cooling Approach in Microchannel Heat Exchangers Using Solid and Hollow Fins
Authors: Nahum Yustus Godi
Abstract:
A three-dimensional numerical optimisation of combined microchannels with constructal solid, half hollow, and hollow circular fins is documented in this paper. The technique seeks to minimize the peak temperature in the entire volume of the microchannel heat sink. The volume and axial length were all fixed, while the width of the microchannel could morph. High-density heat flux was applied at the bottom wall of the microchannel. The coolant employed to remove the heat deposited at the bottom surface of the microchannel was a single-phase fluid (water) in a forced convection laminar condition, and heat transfer was a conjugate problem. The unit cell symmetrical computation domain was discretised, and the governing equations were solved using a computational fluid dynamics (CFD) code. The results reveal that the combined microchannel with hollow circular fins and solid fins performed better at different Reynolds numbers. The numerical study was validated for the single microchannel without fins and found to be in good agreement with previous studies.
Keywords: constructal fins, complex heat exchangers, cooling technique, numerical optimisation
Procedia PDF Downloads 228
1220 Comparative Study of Arch Bridges with Varying Rise to Span Ratio
Authors: Tauhidur Rahman, Arnab Kumar Sinha
Abstract:
This paper presents a comparative study of arch bridges based on their varying rise-to-span ratio. The comparison is made between different steel arch bridges which have variable span lengths and rise-to-span ratios, keeping the same support condition. The aim of our present study is to select the optimum value of the rise-to-span ratio of an arch bridge, as the cost of the arch bridge increases with increasing rise. In order to fulfill the objective, several rise-to-span ratios have been considered for the same span of the arch bridge, and various structural parameters such as bending moment, shear force, etc., have been calculated for the different models. A comparative study has been done for several arch bridges to finally select the optimum rise-to-span ratio of the arch bridges. In the present study, finite element models for medium to long spans, with different rise-to-span ratios, have been modeled and analyzed with the help of the computational software MIDAS Civil to evaluate results such as bending moments, shear forces, displacements, stresses, influence line diagrams, and critical loads. In the present study, 60 models of arch bridges for 80 to 120 m spans with different rise-to-span ratios have been thoroughly investigated.
Keywords: arch bridge, analysis, comparative study, rise to span ratio
Procedia PDF Downloads 539
1219 The Effect of Velocity Increment by Blockage Factor on Savonius Hydrokinetic Turbine Performance
Authors: Thochi Seb Rengma, Mahendra Kumar Gupta, P. M. V. Subbarao
Abstract:
Hydrokinetic turbines can be used to produce power in inaccessible villages located near rivers. A hydrokinetic turbine uses the kinetic energy of the water and may be placed directly in the natural flow of water without dams. For off-grid power production, the Savonius-type vertical axis turbine is the easiest to design and manufacture. This work uses three-dimensional computational fluid dynamics (CFD) simulations to capture the considerable interaction and complexity of the turbine blades. Savonius hydrokinetic turbine (SHKT) performance is affected by blockage in rivers, canals, and waterways. Putting a large object in a water channel causes water obstruction and raises the local free-stream velocity. The blockage correction factor or velocity increment measures the impact of this velocity on the performance. SHKT performance is evaluated by comparing the power coefficient (Cp) with the tip-speed ratio (TSR) at various blockage ratios. The maximum Cp was obtained at a TSR of 1.1 with a blockage ratio of 45%, whereas a TSR of 0.8 yielded the highest Cp without blockage. The greatest Cp of 0.29 was obtained with a 45% blockage ratio, compared to a maximum Cp of 0.18 without blockage.
Keywords: savonius hydrokinetic turbine, blockage ratio, vertical axis turbine, power coefficient
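A minimal sketch of how the performance metrics above are computed, with a simple continuity-based velocity increment for blockage; the rotor dimensions, torque, rotor speed, and the 1/(1 - BR) correction are illustrative assumptions, since the abstract does not state the correction formula used.

```python
rho = 998.0          # water density (kg/m^3)
V = 0.8              # undisturbed free-stream velocity (m/s), assumed
BR = 0.45            # blockage ratio (rotor frontal area / channel area)
D, H = 0.25, 0.30    # rotor diameter and height (m), assumed
omega = 12.0         # rotor speed (rad/s), assumed
torque = 2.8         # simulated shaft torque (N*m), assumed

A = D * H                          # rotor swept (frontal) area
V_c = V / (1.0 - BR)               # simple continuity-based velocity increment

TSR = omega * (D / 2) / V_c                      # tip-speed ratio
Cp = torque * omega / (0.5 * rho * A * V_c**3)   # power coefficient

print(f"corrected velocity = {V_c:.2f} m/s, TSR = {TSR:.2f}, Cp = {Cp:.3f}")
```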
Procedia PDF Downloads 139
1218 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue with imperfect CSI is to keep the probability of each user's rate outage below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to handle the optimization problem under imperfect CSI. These methods are used to achieve improved outage quality and lower complexity, and they provide a safe, tractable approximation of the original rate outage constraints. Based on these method implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information
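For reference, a common way to write the per-user rate outage constraint discussed above, in the notation typically used for outage-constrained MU-MISO beamforming; the symbols here are generic placeholders rather than this paper's exact notation.

```latex
% User k's achievable rate must exceed the target r_k with probability
% at least 1 - rho_k, taken over the CSI error e_k.
\Pr_{\mathbf{e}_k}\!\left\{ \log_2\!\bigl(1 + \mathrm{SINR}_k(\mathbf{h}_k + \mathbf{e}_k)\bigr) \ge r_k \right\} \ge 1 - \rho_k,
\qquad k = 1, \dots, K .
```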
Procedia PDF Downloads 817