Search results for: input constraints
1038 Enhancing Healthcare Delivery in Low-Income Markets: An Exploration of Wireless Sensor Network Applications
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Healthcare delivery in low-income markets is fraught with numerous challenges, including limited access to essential medical resources, inadequate healthcare infrastructure, and a significant shortage of trained healthcare professionals. These constraints lead to suboptimal health outcomes and a higher incidence of preventable diseases. This paper explores the application of Wireless Sensor Networks (WSNs) as a transformative solution to enhance healthcare delivery in these underserved regions. WSNs, comprising spatially distributed sensor nodes that collect and transmit health-related data, present opportunities to address critical healthcare needs. Leveraging WSN technology facilitates real-time health monitoring and remote diagnostics, enabling continuous patient observation and early detection of medical issues, especially in areas with limited healthcare facilities and professionals. The implementation of WSNs can enhance the overall efficiency of healthcare systems by enabling timely interventions, reducing the strain on healthcare facilities, and optimizing resource allocation. This paper highlights the potential benefits of WSNs in low-income markets, such as cost-effectiveness, increased accessibility, and data-driven decision-making. However, deploying WSNs involves significant challenges, including technical barriers like limited internet connectivity and power supply, alongside concerns about data privacy and security. Moreover, robust infrastructure and adequate training for local healthcare providers are essential for successful implementation. The paper further examines future directions for WSNs, emphasizing innovation, scalable solutions, and public-private partnerships.
By addressing these challenges and harnessing the potential of WSNs, it is possible to revolutionize healthcare delivery and improve health outcomes in low-income markets.
Keywords: wireless sensor networks (WSNs), healthcare delivery, low-income markets, remote patient monitoring, health data security
Procedia PDF Downloads 36
1037 Evaluation of the Performance of Solar Stills as an Alternative for Brine Treatment Applying the Monte Carlo Ray Tracing Method
Authors: B. E. Tarazona-Romero, J. G. Ascanio-Villabona, O. Lengerke-Perez, A. D. Rincon-Quintero, C. L. Sandoval-Rodriguez
Abstract:
Desalination offers solutions for the shortage of water in the world; however, the process of eliminating salts generates a by-product known as brine, generally discharged into the environment through techniques that mitigate its impact. Brine treatment techniques are vital to developing an environmentally sustainable desalination process. Consequently, this document evaluates three different geometric configurations of solar stills as an alternative for brine treatment to be integrated into a low-scale desalination process. The geometric scenarios studied were selected because they have characteristics that suit the concept of appropriate technology: low cost, reliance on local labor and material resources for manufacturing, modularity, and simplicity of construction. Additionally, the conceptual design of the collectors was carried out, and the ray tracing methodology was applied through the open-access software SolTrace and Tonatiuh. The simulation process used 600.00 rays and varied two input parameters: direct normal irradiance (DNI) and reflectance. In summary, for the scenarios evaluated, the ladder-type distiller presented higher efficiency values compared to the pyramid-type and single-slope collectors. Finally, the efficiency of the collectors studied was directly related to their geometry; that is, larger geometries allow them to receive a greater number of solar rays along various paths, affecting the efficiency of the device.
Keywords: appropriate technology, brine treatment techniques, desalination, monte carlo ray tracing
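The ray-counting idea behind Monte Carlo ray tracing of this kind can be illustrated with a minimal sketch. All geometry, angles, and ray counts below are invented illustrative values, not the study's SolTrace/Tonatiuh setup: rays are sampled at random over an aperture, propagated as straight lines, and the captured fraction estimates the optical efficiency.

```python
import math
import random

def mc_ray_efficiency(width, depth, tilt, n_rays, seed=0):
    """Toy Monte Carlo ray tracer: rays enter through a unit aperture
    at a fixed tilt angle and are counted as captured when they land
    on an absorber strip [0, width] at the given depth."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        x0 = rng.random()                     # ray entry point on the aperture
        x_land = x0 + depth * math.tan(tilt)  # straight-line propagation
        if 0.0 <= x_land <= width:
            hits += 1
    return hits / n_rays

# A wider absorber intercepts more rays, mirroring the geometry effect
# reported in the abstract (larger collectors capture more ray paths).
eff_narrow = mc_ray_efficiency(width=0.5, depth=0.2, tilt=0.1, n_rays=10_000)
eff_wide = mc_ray_efficiency(width=0.9, depth=0.2, tilt=0.1, n_rays=10_000)
```

Real tools like SolTrace additionally model reflection, absorption, and sun shape; the counting principle is the same.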
Procedia PDF Downloads 71
1036 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation
Authors: Lae-Jeong Park
Abstract:
The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG for the sake of real-time operation suffer from false positives. The color cue in an input image provides salient information on a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of the approach is presented that adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of resulting segmentations. Experimental results show that the presented method can reject more false positives than the single prior shape-based classifier as well as detectors with local features. The improvement is possible because the presented approach can reduce false positives that have the same colors in the head and shoulder foregrounds.
Keywords: pedestrian detection, color segmentation, false positive, feature extraction
Procedia PDF Downloads 281
1035 Variation in N₂ Fixation and N Contribution by 30 Groundnut (Arachis hypogaea L.) Varieties Grown in Blesbokfontein Mpumalanga Province, South Africa
Authors: Titus Y. Ngmenzuma, Cherian Mathews, Felix D. Dakora
Abstract:
In Africa, poor nutrient availability, particularly of N and P, coupled with low soil moisture due to erratic rainfall, constitutes the major crop production constraint. Although inorganic fertilizers are an option for meeting crop nutrient requirements for increased grain yield, the high cost and scarcity of inorganic inputs make them inaccessible to resource-poor farmers in Africa. Because crops grown on such nutrient-poor soils are micronutrient deficient, incorporating N₂-fixing legumes into cropping systems can sustainably improve crop yield and nutrient accumulation in the grain. In Africa, groundnut can easily form an effective symbiosis with native soil rhizobia, leading to marked N contribution in cropping systems. In this study, field experiments were conducted at Blesbokfontein in Mpumalanga Province to assess N₂ fixation and N contribution by 30 groundnut varieties during the 2018/2019 planting season using the ¹⁵N natural abundance technique. The results revealed marked differences in shoot dry matter yield, symbiotic N contribution, soil N uptake and grain yield among the groundnut varieties. The percent N derived from fixation ranged from 37 to 44% for varieties ICGV131051 and ICGV13984. The amount of N-fixed ranged from 21 to 58 kg/ha for varieties Chinese and IS-07273, soil N uptake from 31 to 80 kg/ha for varieties IS-07947 and IS-07273, and grain yield from 193 to 393 kg/ha for varieties ICGV15033 and ICGV131096, respectively. Compared to earlier studies on groundnut in South Africa, this study has shown low N₂ fixation and N contribution to the cropping systems, possibly due to environmental factors such as low soil moisture.
Because the groundnut varieties differed in their growth, symbiotic performance and grain yield, more field testing is required over a range of differing agro-ecologies to identify genotypes suitable for different cropping environments.
Keywords: ¹⁵N natural abundance, percent N derived from fixation, amount of N-fixed, grain yield
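The quantities named in the keywords follow the standard ¹⁵N natural abundance formulas: %Ndfa = (δ¹⁵N_ref − δ¹⁵N_legume) / (δ¹⁵N_ref − B) × 100, and the amount of N-fixed is %Ndfa applied to shoot N uptake. A minimal sketch (the δ¹⁵N, B, and shoot-N numbers below are invented illustrations, not the study's data):

```python
def percent_ndfa(delta15n_ref, delta15n_legume, b_value):
    """Percent N derived from atmospheric fixation, 15N natural abundance
    method: %Ndfa = (d_ref - d_legume) / (d_ref - B) * 100, where B is the
    d15N of the legume when fully dependent on N2 fixation."""
    return (delta15n_ref - delta15n_legume) / (delta15n_ref - b_value) * 100.0

def n_fixed_kg_ha(ndfa_percent, shoot_n_kg_ha):
    """Amount of N-fixed = %Ndfa applied to total shoot N uptake."""
    return ndfa_percent / 100.0 * shoot_n_kg_ha

# Illustrative numbers only: reference-plant d15N of 5.0 permil,
# legume d15N of 2.8 permil, B value of -1.5 permil, 120 kg N/ha shoot N.
ndfa = percent_ndfa(5.0, 2.8, -1.5)   # ~33.8 %Ndfa
fixed = n_fixed_kg_ha(ndfa, 120.0)    # ~40.6 kg N/ha
```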
Procedia PDF Downloads 188
1034 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass
Authors: Goodness Onwuka, Khaled Abou-El-Hossein
Abstract:
Borosilicate-crown (BK7) glass has found broad application in the optic and automotive industries, and the growing demand for nanometric surface finishes is becoming a necessity in such applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens. The research was carried out on a 4-axis Nanoform 250 precision lathe machine with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed, and depth of cut at three levels for different combinations using a Box-Behnken design of experiment, and the resulting surface roughness values were measured using a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back propagation neural network prediction algorithm through careful selection of parameters, and the results show better prediction accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding
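A back-propagation network of the kind described reduces to a few update rules at any scale. The toy sketch below trains a minimal one-hidden-layer network with plain gradient descent; the data pairs are invented stand-ins for (machining parameters → surface roughness), not the study's acoustic-emission features.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyBackpropNet:
    """Minimal 2-input, 3-hidden, 1-output back-propagation network,
    trained with per-sample gradient descent on squared error."""
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
        self.b1 = [0.0, 0.0, 0.0]
        self.w2 = [rng.uniform(-1, 1) for _ in range(3)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(ws[0] * x[0] + ws[1] * x[1] + b)
                  for ws, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, y, lr=0.3):
        out = self.forward(x)
        err = out - y  # dLoss/dout for loss = 0.5 * (out - y)^2
        for j in range(3):
            # back-propagate through the sigmoid before updating w2[j]
            grad_h = err * self.w2[j] * self.h[j] * (1.0 - self.h[j])
            self.w2[j] -= lr * err * self.h[j]
            self.b1[j] -= lr * grad_h
            for i in range(2):
                self.w1[j][i] -= lr * grad_h * x[i]
        self.b2 -= lr * err
        return 0.5 * err * err

# Invented toy data: normalized (feed rate, wheel speed) -> roughness.
data = [((0.1, 0.2), 0.30), ((0.9, 0.4), 0.80),
        ((0.5, 0.9), 0.55), ((0.2, 0.7), 0.40)]
net = TinyBackpropNet()
loss_before = sum(net.train_step(x, y) for x, y in data)
for _ in range(500):
    loss_after = sum(net.train_step(x, y) for x, y in data)
```

The real study would feed acoustic-emission features into a larger network, but the gradient updates are the same in kind.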
Procedia PDF Downloads 305
1033 Design and Development of an Autonomous Beach Cleaning Vehicle
Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk
Abstract:
In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate and real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited for the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution.
The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics
Procedia PDF Downloads 27
1032 The Development of an Automated Computational Workflow to Prioritize Potential Resistance Variants in HIV Integrase Subtype C
Authors: Keaghan Brown
Abstract:
The prioritization of drug resistance mutations impacting protein folding or protein-drug and protein-DNA interactions within macromolecular systems is critical to the success of treatment regimens. With a continual increase in computational tools to assess these impacts, the need for scalability and reproducibility has become an essential component of computational analysis and experimental research. Here we introduce a bioinformatics pipeline that combines several structural analysis tools in a simplified workflow, optimizing the available computational hardware and software to automatically ease the flow of data transformations. Utilizing pre-established software tools, it was possible to develop a pipeline with a set of pre-defined functions that automates mutation introduction into the HIV-1 Integrase protein structure, calculates the gain and loss of polar interactions, and calculates the change in protein folding energy. Additionally, an automated molecular dynamics analysis was implemented, which reduces the constant need for user input and output management. The resulting pipeline, Automated Mutation Introduction and Analysis (AMIA), is an open-source set of scripts designed to introduce mutations and analyse their effects on the static protein structure as well as on the multi-conformational states from molecular dynamics simulations. The workflow allows the user to visualize all outputs in a user-friendly manner, thereby enabling the prioritization of variant systems for experimental validation.
Keywords: automated workflow, variant prioritization, drug resistance, HIV Integrase
Procedia PDF Downloads 77
1031 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal
Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha
Abstract:
Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as the mining industry. Industry engineers continually improve upon PDC drill bit designs and hydraulic conditions. Optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit, based on the field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations, and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software. The data from both software programs agree. The second part of the paper describes the parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, and nozzle radial position and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit
Procedia PDF Downloads 420
1030 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Research on damage of gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by the noise from dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands when more than one family of sidebands is present at the same time. The concepts of the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for detection of an artificial pitting defect in a vehicle gearbox loaded with different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. Also, a method for fault diagnosis of gear faults is presented based on the order cepstrum. The procedure is illustrated with the experimental vibration data of the vehicle gearbox. The results show the effectiveness of cepstrum analysis in detection and diagnosis of the gear condition.
Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals
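The core cepstrum computation is compact: the real cepstrum is the inverse transform of the log magnitude spectrum, so periodic structure in the spectrum (harmonic families and sidebands) collapses into isolated peaks at the corresponding quefrencies. A minimal stdlib-only sketch using a slow textbook DFT and a toy sine signal, not gearbox data:

```python
import cmath
import math

def dft(x):
    """Direct O(n^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def real_cepstrum(signal):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.
    A small floor avoids log(0) on numerically zero bins."""
    spectrum = dft(signal)
    log_mag = [math.log(abs(c) + 1e-12) for c in spectrum]
    n = len(log_mag)
    # log_mag is real and even for a real signal, so the inverse DFT is real
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Toy periodic signal; a gearbox analysis would use measured vibration data
# and look for "rahmonics" tied to the faulty gear's rotation period.
sig = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
ceps = real_cepstrum(sig)
```

In practice an FFT library replaces the slow DFT; the definition is unchanged.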
Procedia PDF Downloads 379
1029 Physicochemical and Microbiological Assessment of Source and Stored Domestic Water from Three Local Governments in Ile-Ife, Nigeria
Authors: Mary A. Bisi-Johnson, Kehinde A. Adediran, Saheed A. Akinola, Hamzat A. Oyelade
Abstract:
Some of the main problems man contends with in Nigeria are the quantity (source and amount) and quality of water. Scarcity leads to water being obtained from various sources, and microbiological contamination of the water may thus occur between the collection point and the point of usage. Thus, this study aims to assess the general and microbiological quality of domestic water sources and household stored water used within selected areas in Ile-Ife, in the south-western part of Nigeria, for microbial contaminants. Physicochemical and microbiological examinations were carried out on 45 source and stored water samples collected from wells and springs in three different local government areas, i.e., Ife East, Ife South, and Ife North. Physicochemical analysis included pH value, temperature, total dissolved solids, dissolved oxygen, and biochemical oxygen demand. Microbiological analysis involved the most probable number (MPN) technique and total coliform, heterotrophic plate, faecal coliform, and streptococcus counts. The results of the physicochemical analysis showed anomalies compared to acceptable standards, with pH values of 7.20-8.60 for stored and 6.50-7.80 for source samples, total dissolved solids (TDS: stored 20-70 mg/L, source 352-691 mg/L), dissolved oxygen (DO: stored 1.60-9.60 mg/L, source 1.60-4.80 mg/L), and biochemical oxygen demand (BOD: stored 0.80-3.60 mg/L, source 0.60-5.40 mg/L). The general microbiological quality indicated that both stored and source samples, with the exception of one sample, were not within the acceptable range, as indicated by the MPN/100 ml analysis, which ranged from 290-1100 for stored and 9-1100 for source samples. Apart from high counts, most samples did not meet the World Health Organization standard for drinking water, given the presence of some pathogenic bacteria and fungi such as Salmonella and Aspergillus spp. To address these constraints, standard treatment methods should be adopted to make water free from contaminants.
This will help identify common and likely water-related infection origins within the communities and thus help guide the interventions required to protect the general populace from such infections.
Keywords: domestic, microbiology, physicochemical, quality, water
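Screening measurements against guideline ranges, as done in this assessment, can be sketched as a simple range check. The limits below are illustrative placeholders, not an authoritative copy of WHO values; consult the current WHO Guidelines for Drinking-water Quality before using such numbers in practice.

```python
# Illustrative acceptable ranges only (parameter -> (low, high)).
LIMITS = {"pH": (6.5, 8.5), "TDS_mg_L": (0.0, 1000.0), "BOD_mg_L": (0.0, 5.0)}

def flag_sample(sample):
    """Return the parameters in a water sample that fall outside
    the configured acceptable ranges."""
    out_of_range = {}
    for name, value in sample.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            out_of_range[name] = value
    return out_of_range

# Values echo the upper end of the stored-sample ranges in the abstract.
stored = {"pH": 8.6, "TDS_mg_L": 70.0, "BOD_mg_L": 3.6}
flags = flag_sample(stored)  # only pH 8.6 exceeds the 6.5-8.5 range
```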
Procedia PDF Downloads 361
1028 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data
Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen
Abstract:
Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated web server called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets, and provides results in an integrated web interface as well as in downloadable flat files. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.
Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation
Procedia PDF Downloads 65
1027 GA3C for Anomalous Radiation Source Detection
Authors: Chia-Yi Liu, Bo-Bin Xiao, Wen-Bin Lin, Hsiang-Ning Wu, Liang-Hsun Huang
Abstract:
In order to reduce the risk of radiation damage that personnel may suffer during operations in a radiation environment, the use of automated guided vehicles to assist or replace on-site personnel has become a key technology and an important trend. In this paper, we demonstrate our proof of concept for an autonomous, self-learning radiation source searcher in an unknown environment without a map. The research uses the GPU version of the Asynchronous Advantage Actor-Critic network (GA3C) of deep reinforcement learning to search for radiation sources. The searcher network, based on the GA3C architecture, has learned and improved in a self-directed manner how to search for the anomalous radiation source by training for 1 million episodes under three simulation environments. In each episode of training, the radiation source position, the radiation source intensity, and the starting position are all set randomly in one simulation environment. The input for the searcher network is the fused data from a 2D laser scanner and an RGB-D camera as well as the value of the radiation detector. The output actions are the linear and angular velocities. The searcher network is trained in a simulation environment to accelerate the learning process. The well-performing searcher network is deployed to the real unmanned vehicle, Dashgo E2, which mounts a YDLIDAR G4 LIDAR, an Intel D455 RGB-D camera, and a radiation detector made by the Institute of Nuclear Energy Research. In the field experiment, the unmanned vehicle is able to search out the 18.5 MBq Na-22 radiation source by itself and avoid obstacles simultaneously without human interference.
Keywords: deep reinforcement learning, GA3C, source searching, source detection
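At the heart of GA3C, as in A3C, is the advantage estimate that scales the policy gradient: discounted returns are computed backwards through an episode and the critic's value estimates are subtracted. A minimal sketch with an invented toy episode (small step penalties, a bonus when the source is found); the reward shaping and value numbers are illustrative, not the paper's:

```python
def discounted_returns(rewards, gamma=0.99, bootstrap=0.0):
    """n-step discounted returns R_t = r_t + gamma * R_{t+1}, seeded with a
    bootstrap value estimate for the state after the last step."""
    returns = []
    running = bootstrap
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

def advantages(rewards, values, gamma=0.99, bootstrap=0.0):
    """Advantage estimates A_t = R_t - V(s_t): the quantity that weights
    the policy-gradient update in actor-critic methods such as A3C/GA3C."""
    returns = discounted_returns(rewards, gamma, bootstrap)
    return [r - v for r, v in zip(returns, values)]

# Toy 3-step episode: two step penalties, then a terminal find bonus.
rew = [-0.01, -0.01, 1.0]
val = [0.2, 0.5, 0.9]          # critic's (invented) value estimates
adv = advantages(rew, val, gamma=0.9)  # [0.591, 0.39, 0.1]
```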
Procedia PDF Downloads 114
1026 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes, transmit diversity (TD) and spatial multiplexing (SM), using the fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision (an inference). The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity
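A minimal fuzzy switching rule base of the kind described might look like the sketch below. The membership ranges and the two rules are invented illustrations, not the paper's tuned system; the encoded rule of thumb is that good channel quality favours SM (throughput) while poor quality favours TD (robustness).

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_scheme(rsnr_db, rssi_dbm):
    """Toy fuzzy switch between transmit diversity (TD) and
    spatial multiplexing (SM) from two channel quality indicators."""
    snr_good = tri(rsnr_db, 10.0, 25.0, 40.0)
    snr_poor = tri(rsnr_db, -5.0, 5.0, 15.0)
    rssi_good = tri(rssi_dbm, -80.0, -60.0, -40.0)
    rssi_poor = tri(rssi_dbm, -110.0, -90.0, -70.0)
    sm_score = min(snr_good, rssi_good)  # rule: IF SNR good AND RSSI good THEN SM
    td_score = max(snr_poor, rssi_poor)  # rule: IF SNR poor OR RSSI poor THEN TD
    return "SM" if sm_score > td_score else "TD"

strong = choose_scheme(rsnr_db=24.0, rssi_dbm=-58.0)  # good channel -> "SM"
weak = choose_scheme(rsnr_db=6.0, rssi_dbm=-95.0)     # poor channel -> "TD"
```

A full Mamdani system would defuzzify a continuous output; picking the stronger rule activation is the simplest defensible shortcut here.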
Procedia PDF Downloads 152
1025 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes
Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi
Abstract:
The purpose of structural control against earthquakes is to dissipate the earthquake input energy to the structure and reduce the plastic deformation of structural members. There are different methods of structural control for reducing the structural response to earthquakes: active, semi-active, passive, and hybrid. In this paper, two different combined control systems are used: the first comprises a base isolator and multiple tuned mass dampers (BI & MTMD), and the second combines a hybrid base isolator and multiple tuned mass dampers (HBI & MTMD), both for controlling an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by fuzzy logic algorithms. The influences of the combined systems on the responses of the benchmark structure under two near-field earthquakes (Newhall and El Centro) are evaluated by nonlinear dynamic time history analysis. Combined control systems consisting of passive or active systems installed in parallel to base-isolation bearings have the capability of significantly reducing the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, the structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.
Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm
Procedia PDF Downloads 304
1024 Design of Demand Pacemaker Using an Embedded Controller
Authors: C. Bala Prashanth Reddy, B. Abhinay, C. Sreekar, D. V. Shobhana Priscilla
Abstract:
The project aims at designing an emergency pacemaker capable of delivering shocks to a human heart that has suddenly stopped working. A pacemaker is a machine commonly used by cardiologists in order to shock a human heart back into operation. The heart works by means of small cells called pacemaker cells, which send electrical pulses to the cardiac muscles that tell the heart when to pump blood. When these electrical pulses stop, the heart stops beating. When this happens, a pacemaker is used to shock the heart muscles and the pacemaker cells back into action. This is achieved by rubbing the two paddles of the pacemaker together to create an adequate electrical current, after which the heart returns to its normal state. The project also aims at designing a system capable of continuously displaying the heart beat and blood pressure of a person on an LCD. The concerned doctor receives the heart beat and blood pressure details continuously through a GSM modem in the form of SMS alerts. In case of an abnormal condition, the doctor sends a message specifying the amount of electric shock needed. The microcontroller automatically passes this input to the pacemaker, which in turn delivers the shock to the patient. The heart beat monitoring and display system is portable and a good replacement for the old-model stethoscope, which is less efficient. The heart beat rate is calculated manually using a stethoscope, where the probability of error is high because the heart beat rate lies in the range of 70 to 90 beats per minute, with each beat lasting less than 1 s, so this device can be considered a very good alternative to a stethoscope.
Keywords: missing R wave, PWM, demand pacemaker, heart
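The heart-rate computation such a monitor performs is simple: BPM is 60 divided by the mean R-R interval, and an unusually long interval can flag a possible missing R wave. The sketch below is a generic illustration; the intervals and the 1.8x dropout threshold are invented, not taken from the paper's design.

```python
def bpm_from_rr(rr_intervals_s):
    """Heart rate in beats per minute from R-R intervals in seconds:
    BPM = 60 / mean R-R interval."""
    if not rr_intervals_s:
        raise ValueError("need at least one R-R interval")
    return 60.0 / (sum(rr_intervals_s) / len(rr_intervals_s))

def missing_beat_suspected(rr_intervals_s, factor=1.8):
    """Flag a possible missing R wave: an interval much longer than the
    median interval suggests a dropped beat."""
    ordered = sorted(rr_intervals_s)
    median = ordered[len(ordered) // 2]
    return any(rr > factor * median for rr in rr_intervals_s)

rate = bpm_from_rr([0.8, 0.82, 0.78, 0.80])        # mean 0.80 s -> 75 BPM
dropout = missing_beat_suspected([0.8, 0.8, 1.6, 0.8])  # 1.6 s gap flagged
```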
Procedia PDF Downloads 482
1023 The Confiscation of Ill-Gotten Gains in Pollution: The Taiwan Experience and the Interaction between Economic Analysis of Law and Environmental Economics Perspectives
Authors: Chiang-Lead Woo
Abstract:
In response to serious environmental problems, the Taiwan government recently amended several articles to meet the needs of environmental protection, such as the amendment to article 190-1 of the Taiwan Criminal Code. This legislative change comes as an improvement that removed the limitation of 'endangering public safety'. At the same time, article 190-1 moves from a concrete offense to an abstract crime of danger. Thus, the public looks forward to whether environmental crime, following the imposition of fines or penalties, works efficiently against pollution through its deterrent effects. However, following the addition of article 38-2 of the Taiwan Criminal Code, the confiscation system seems a controversial piece of legislation for restraining ill-gotten gains. Most prior studies focused on comparisons between the Administrative Penalty Law and the Criminal Code on environmental issues in Taiwan; recently, more and more studies emphasize the calculation of ill-gotten gains. Hence, this paper tries to examine the deterrent effect in environmental crime from the perspectives of economic analysis of law and environmental economics. This analysis shows that only if there is an extremely high probability (equal to 100 percent) of an environmental crime case being prosecuted criminally by the Taiwan Environmental Protection Agency will the deterrent effects work. Therefore, this paper suggests improving the confiscation system by supplementing the System of Environmental and Economic Accounting, setting reasonable deterrent fines, input management, real-time systems for the detection of pollution, a whistleblower system, environmental education, and modernization of the law.
Keywords: confiscation, ecosystem services, environmental crime, ill-gotten gains, the deterrent effect, the system of environmental and economic accounting
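The deterrence condition underlying this analysis can be stated as a one-line check for a risk-neutral offender: offending is unprofitable only when the expected penalty (detection probability times fine) is at least the ill-gotten gain. A sketch with invented figures:

```python
def deters(illicit_gain, fine, detection_probability):
    """Becker-style deterrence check for a risk-neutral polluter:
    offending is unprofitable when expected penalty >= ill-gotten gain."""
    return detection_probability * fine >= illicit_gain

gain = 1_000_000.0
# With a fine merely equal to the gain, anything short of certain
# prosecution (p = 1.0) fails to deter, matching the abstract's point.
undeterred = deters(gain, fine=gain, detection_probability=0.6)    # False
certain = deters(gain, fine=gain, detection_probability=1.0)       # True
# Confiscating the gain plus an added fine restores deterrence at lower p.
with_confiscation = deters(gain, fine=gain + 2_000_000.0,
                           detection_probability=0.4)              # True
```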
Procedia PDF Downloads 169
1022 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Authors: B. Elshafei, X. Mao
Abstract:
The demand for renewable energy is significantly increasing, and major investments are flowing into the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on, and driven by, the prediction of wind speed, which is by nature highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds for medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark, representing the wind speed across the study area for the period between December 2015 and March 2016. The study aims to investigate the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing signals in wind speed forecasting models.
Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation
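The evaluation logic (denoise, then compare by RMSE) can be sketched in a few lines. The denoiser below is a crude moving-average stand-in for the empirical wavelet transform, applied to a synthetic signal; it is not the study's method, only an illustration of why denoising lowers the RMSE metric the study reports.

```python
import math

def moving_average(signal, window=3):
    """Crude denoising stand-in for the EWT step: a centred moving
    average that suppresses high-frequency noise."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def rmse(pred, truth):
    """Root mean square error, the comparison metric used in the study."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

# Synthetic "wind speed" with alternating +/-0.2 measurement noise.
truth = [math.sin(0.3 * t) for t in range(50)]
noisy = [v + (0.2 if t % 2 == 0 else -0.2) for t, v in enumerate(truth)]
denoised = moving_average(noisy, window=3)
# rmse(noisy, truth) is exactly 0.2; the smoothed signal scores lower.
```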
Procedia PDF Downloads 135
1021 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic
Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo
Abstract:
Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and adolescents’ perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern and the perception of it as a risk factor for NCDs among undergraduates of Federal Polytechnic, Bauchi. Methodology: The study was descriptive and cross-sectional in design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS) version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were males, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% agreed that their consumption has implications for NCDs. The distribution of fast foods consumed included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%), and carbonated drinks (3.8%). 30.3% consumed fast food three times a week, and 71% attributed high consumption of fast food to workload.
Conclusion: The study revealed that social pressure from peers, time constraints, class pressure, and the school programme strongly influence the high percentage of higher-institution students who consume fast foods. Nutrition education campaigns for campus food outlets and vendors, as well as behavioural change communication on healthy nutrition and lifestyles among young people, are therefore advocated.Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate
Procedia PDF Downloads 471
1020 Dual Duality for Unifying Spacetime and Internal Symmetry
Authors: David C. Ni
Abstract:
The current efforts toward a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and the related formalisms. The geometric approaches for extending General Relativity seek to establish global and local invariance embedded into metric formalisms, whereby additional dimensions are constructed to unify canonical formulations, such as the Hamiltonian and Lagrangian formulations. The approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation and thereafter manifested in the Standard Model. This thread of efforts has been constructing super-symmetry for mapping fermions and bosons, as well as the gluon and the graviton. The efforts in String Theory have been evolving toward the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches among the above three formalisms as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and are often generally coined as duality. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization, and others.
Based on the Fundamental Theorem of Algebra, we argue that, rather than imposing effective constraints on different algebras and their extensions, which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance, a new addition should be made: a momentum-angular momentum duality at the level of the electromagnetic duality. We propose this duality for rationalizing the duality algebras and then characterize it numerically, with an attempt to address some unsolved problems in physics and astrophysics.Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum
Procedia PDF Downloads 397
1019 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis
Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan
Abstract:
Carbon dioxide (CO2), alongside other gases emitted into the atmosphere, causes a greenhouse effect, resulting in an increase in the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions. Stationary vehicles with idling engines produce more emissions than moving ones. Intersections with traffic lights, which force vehicles to remain stationary for a period of time, therefore produce more CO2 pollution than other parts of the road. This paper analyzes the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transit (LRT). The data were gathered during the construction phase of the LRT by counting the number of vehicles on each approach of the intersection for 15 minutes during the evening rush hour (6-7 pm, July 04-31, 2018) and then multiplying by 4 to estimate the hourly flow of vehicles. The data were analyzed with the microscopic simulation software VISSIM. The traffic flow was processed in three stages: before implementation, during the construction phase, and after implementation of the light rail. Finally, the traffic results were input into a second software package, EnViVer, to calculate the amount of CO2 emitted per hour. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.Keywords: carbon dioxide, emission modeling, light rail, microscopic model, traffic flow
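The headline figure — a minimum 13% drop in hourly CO2 — is a plain relative change. The sketch below recomputes such a percentage from two hypothetical hourly totals; the abstract does not report the absolute EnViVer outputs, so the numbers are placeholders only.

```python
def co2_reduction_pct(before: float, after: float) -> float:
    """Percentage drop in CO2 between two scenarios."""
    return (before - after) / before * 100.0

co2_before = 1200.0   # hypothetical kg CO2 per hour, pre-LRT scenario
co2_after = 1020.0    # hypothetical kg CO2 per hour, post-LRT scenario
print(round(co2_reduction_pct(co2_before, co2_after), 1))  # 15.0
```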
Procedia PDF Downloads 143
1018 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy
Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla
Abstract:
Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength, and non-magnetic properties, owing to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and they have been commonly used in many industrial applications. However, continuous exposure to the marine environment increases the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the elemental composition, this comes at the price of a more complex manufacturing process and an increased risk of reducing the ductility of the Al bronze alloy. In this research, ironmaking slag and waste plastic were used as the input source for the surface modification of an Al bronze alloy. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out using the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. The results indicate that the uniformly modified surface, which results from a selective diffusion process, enhanced the corrosion resistance by up to 12.67%. This approach opens a new opportunity for various industrial applications at commercial scale by minimising the dependency on natural resources and transforming waste sources into a protective coating in an environmentally friendly and cost-effective way.Keywords: aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance
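One common way to express an enhancement such as the reported 12.67% is the protection efficiency computed from Tafel-extrapolated corrosion current densities, PE = (1 − i_corr,coated / i_corr,bare) × 100. The abstract does not give the current densities, so the values below are hypothetical, chosen only to reproduce the reported percentage.

```python
def protection_efficiency(i_corr_bare: float, i_corr_coated: float) -> float:
    """Protection efficiency (%) from Tafel corrosion current densities (A/cm^2):
    the fractional drop in corrosion current relative to the bare alloy."""
    return (1.0 - i_corr_coated / i_corr_bare) * 100.0

# Hypothetical current densities, picked so the efficiency lands on 12.67%.
i_bare = 3.00e-6
i_coated = i_bare * (1 - 0.1267)
print(round(protection_efficiency(i_bare, i_coated), 2))  # 12.67
```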
Procedia PDF Downloads 236
1017 Fijian Women’s Role in Disaster Risk Management: Climate Change
Authors: Priyatma Singh, Manpreet Kaur
Abstract:
Climate change is progressively being identified as a global crisis, and this has immediate repercussions for the Fiji Islands because of their geographical location, which is prone to natural disasters. In the Pacific, it is common to find significant differences between men and women in terms of their roles and responsibilities. In the pursuit of prudent preparedness before disasters, Fijian women’s engagement is constrained by the socially constructed roles and expectations of women in Fiji. This vulnerability is aggravated by viewing women as victims rather than as key people who hold vital information about their society, economy, and environment, as well as useful skills which, when recognized and used, can be effective in disaster risk reduction. The focus of this study on disaster management is to outline ways in which Fijian women can be actively engaged in disaster risk management and participate in decision-making, negating the perceived ideology of women’s constricted roles in Fiji and unveiling the social constraints that limit women’s access to practical disaster management strategic planning. This paper outlines the importance of gender mainstreaming in disaster risk reduction and the ways of mainstreaming gender, based on a literature review. It presents a theoretical analysis of the academic literature as well as papers and reports produced by various national and international institutions, and it explores ways to better inform and engage women in climate change and, in particular, disaster management in Fiji. The empowerment of women is believed to be a critical element in building disaster resilience, as women are often considered to be the designers of community resilience at the local level. Gender mainstreaming, as a way of bringing a gender perspective into climate-related disasters, can be applied to distinguish the varying needs and capacities of women and to integrate them into climate change adaptation strategies.
This study advocates women’s participation in disaster risk management, thus giving women equal standing in Fiji, and also identifies the gaps and informs national and local disaster risk management authorities so that they can implement processes that enhance gender equality and women’s empowerment towards a more equitable and effective disaster practice.Keywords: disaster risk management, climate change, gender mainstreaming, women empowerment
Procedia PDF Downloads 388
1016 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts
Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti
Abstract:
Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is selected as the demonstrative AM technology due to its great popularity in industrial manufacturing. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) scheme based on the octree strategy is used to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity, cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented because of the orthotropic and heterogeneous constitutive nature of FFF components and because of the differences between their strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization
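For reference, the plane-stress form of the Tsai-Wu criterion the abstract invokes can be written out directly; failure is predicted when the index reaches 1, and the distinct tensile/compressive strengths enter through the linear terms. The strength values below are hypothetical placeholders, not the demonstrator's measured properties, and the F12 interaction term uses a common default rather than a measured one.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; >= 1 indicates predicted failure.
    Xt/Xc and Yt/Yc are tensile/compressive strengths (positive values)
    along and across the print direction; S is the in-plane shear strength."""
    F1 = 1 / Xt - 1 / Xc          # linear terms capture tension/compression asymmetry
    F2 = 1 / Yt - 1 / Yc
    F11 = 1 / (Xt * Xc)
    F22 = 1 / (Yt * Yc)
    F66 = 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common default for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Hypothetical strengths (MPa) for an FFF-printed orthotropic material.
props = dict(Xt=40.0, Xc=35.0, Yt=25.0, Yc=30.0, S=20.0)
print(tsai_wu_index(10.0, 5.0, 4.0, **props) < 1.0)   # moderate load: safe
print(tsai_wu_index(60.0, 0.0, 0.0, **props) >= 1.0)  # axial overload: failure
```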
Procedia PDF Downloads 63
1015 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNNs) that take the dot-matrix image as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the limited availability of dotted Arabic expiration-date images, we developed a TrueType Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3,287 images and tested on 658 synthetic images representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in the Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. The proposed approach is not limited to Arabic dot-matrix digit recognition; it can also be extended to other text recognition tasks, such as text classification and sentiment analysis.Keywords: computer vision, pattern recognition, optical character recognition, deep learning
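The decoding step — round the CNN's real-valued embedding to bits, then match against per-class binary codes — can be sketched without the network itself. In this illustration each digit's code is simply its own 4-bit binary expansion; in the paper's model the codes arise from the trained CNN, so this shows only the matching logic, not the learned representation.

```python
import numpy as np

# Hypothetical 4-bit codes: digit d is represented by its binary expansion.
codebook = np.array([[(d >> k) & 1 for k in range(4)] for d in range(10)])

def decode(embedding) -> int:
    """Round a real-valued embedding to bits, then return the digit whose
    binary code is nearest in Hamming distance."""
    bits = (np.asarray(embedding) >= 0.5).astype(int)
    dists = (codebook != bits).sum(axis=1)   # Hamming distance to each class code
    return int(dists.argmin())

# A noisy real-valued embedding near digit 3's code ([1, 1, 0, 0], LSB first)
# still rounds to the right bits and decodes correctly.
print(decode([0.9, 1.1, 0.2, -0.1]))  # 3
```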
Procedia PDF Downloads 94
1014 Effectiveness of Lowering the Water Table as a Mitigation Measure for Foundation Settlement in Liquefiable Soils Using 1-g Scale Shake Table Test
Authors: Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed
Abstract:
An earthquake is an unpredictable natural disaster. It induces liquefaction, which causes considerable damage to structures, lifelines, and piping systems because of ground settlement. As a result, there is great concern about how to mitigate this hazard. Previous researchers adopted different ground improvement techniques to reduce the settlement of structures during earthquakes. This study evaluates the effectiveness of lowering the water table as a technique to mitigate foundation settlement in liquefiable soil. The performance is evaluated based on the foundation settlement and the reduction of excess pore water pressure. A scaled model was prepared based on a full-scale shake table experiment conducted at the University of California, San Diego (UCSD). The model ground consists of three soil layers with relative densities of 55%, 45%, and 90%, respectively. A shallow foundation rests on an unsaturated crust layer. After preparation of the model ground, the water table was set at 45, 40, and 35 cm (from the bottom) in successive tests. The input motions were then applied for 10 seconds, with a peak acceleration of 0.25 g and a constant frequency of 2.73 Hz. The experimental results clearly showed the effectiveness of lowering the water table in reducing the foundation settlement and the excess pore water pressure: the foundation settlement was reduced from 50 mm to 5 mm. In addition, lowering the water table is a cost-effective mitigation measure for decreasing liquefaction-induced building settlement.Keywords: foundation settlement, groundwater table, liquefaction, shake table test
Procedia PDF Downloads 113
1013 The Immediate Effects of Thrust Manipulation for Thoracic Hyperkyphosis
Authors: Betul Taspinar, Eda O. Okur, Ismail Saracoglu, Ismail Okur, Ferruh Taspinar
Abstract:
Thoracic hyperkyphosis, a well-known spinal phenomenon, refers to an excessive curvature (> 40 degrees) of the thoracic spine. The aim of this study was to explore the effectiveness of thrust manipulation on thoracic spine alignment. Thirty-one young adults with hyperkyphosis, diagnosed with the Spinal Mouse® device, were randomly assigned to either a thrust manipulation group (n=16; 11 female, 5 male) or a sham manipulation group (n=15; 8 female, 7 male). Thrust and sham manipulations were performed by a blinded physiotherapist who is a certified expert in musculoskeletal physiotherapy. The degree of thoracic kyphosis was measured after the interventions via the Spinal Mouse®. The Wilcoxon test was used to analyse the data obtained before and after the manipulation in each group, whereas the Mann-Whitney U test was used to compare the groups. The mean baseline thoracic kyphosis in the thrust and sham groups was 50.69° ± 7.73 and 48.27° ± 6.43, respectively; there was no statistically significant difference between the groups in the initial kyphosis degrees (p=0.51). After the interventions, the mean thoracic kyphosis in the thrust and sham groups was 44.06° ± 6.99 and 48.93° ± 6.57, respectively (p=0.03). There was no statistically significant difference between before and after the intervention in the sham group (p=0.33), while the mean thoracic kyphosis in the thrust group decreased significantly (p=0.00). Thrust manipulation can attenuate thoracic hyperkyphosis immediately in young adults, beyond a placebo effect. Manipulation might provide accurate proprioceptive (sensory) input to the spinal joints and reduce kyphosis by restoring normal segmental mobility. Thoracic manipulation might therefore be included in physiotherapy programs to treat hyperkyphosis.Keywords: hyperkyphosis, manual therapy, spinal mouse, physiotherapy
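The statistical design (paired Wilcoxon within a group, Mann-Whitney U between groups) maps directly onto scipy.stats. The angle data below are synthetic draws that merely mimic the reported group sizes and means; they are not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic kyphosis angles (degrees), mimicking the reported design:
# the thrust group improves by several degrees, the sham group does not.
thrust_pre = rng.normal(50.7, 7.7, 16)
thrust_post = thrust_pre - rng.normal(6.6, 1.0, 16)   # assumed improvement
sham_pre = rng.normal(48.3, 6.4, 15)
sham_post = sham_pre + rng.normal(0.0, 1.0, 15)       # no systematic change

# Within-group comparison (paired samples): Wilcoxon signed-rank test.
w = stats.wilcoxon(thrust_pre, thrust_post)
# Between-group comparison (independent samples): Mann-Whitney U test.
u = stats.mannwhitneyu(thrust_post, sham_post)
print(w.pvalue < 0.05)  # the simulated thrust-group change is significant
```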
Procedia PDF Downloads 345
1012 Literature Review on the Barriers to Access Credit for Small Agricultural Producers and Policies to Mitigate Them in Developing Countries
Authors: Margarita Gáfaro, Karelys Guzmán, Paola Poveda
Abstract:
This paper establishes the theoretical aspects that explain the barriers to accessing credit for small agricultural producers in developing countries and identifies successful policy experiences to mitigate them. We test two hypotheses. The first is that information asymmetries, high transaction costs, and high risk exposure limit the supply of credit to small agricultural producers in developing countries. The second is that low levels of financial education and productivity and high uncertainty about the returns of agricultural activity limit the demand for credit. To test these hypotheses, a review of the theoretical and empirical literature on access to rural credit in developing countries is carried out. The first part of this review focuses on theoretical models that incorporate information asymmetries in the credit market and analyzes the interaction between these asymmetries and the characteristics of the agricultural sector in developing countries. Some of the characteristics we focus on are the absence of collateral, the underdevelopment of judicial systems and insurance markets, and the high dependence of production technologies on climatic factors. The second part of this review focuses on the determinants of credit demand by small agricultural producers, including the profitability of productive projects, security conditions, risk and loss aversion, financial education, and cognitive biases, among others. Some policies focus on resolving these supply and demand constraints and thereby improve credit access. Therefore, another objective of this paper is to present a review of effective policies that have promoted access to credit for smallholders around the world. For this purpose, information available in policy documents is collected and complemented by interviews with officials in charge of the design and execution of these policies in a subset of selected countries.
The information collected is analyzed in light of the conceptual framework proposed in the first two parts of the review. The barriers to credit access that each policy attempts to resolve, and the factors that could explain its effectiveness, are identified.Keywords: agricultural economics, credit access, smallholder, developing countries
Procedia PDF Downloads 69
1011 Fluorescence Effect of Carbon Dots Modified with Silver Nanoparticles
Authors: Anna Piasek, Anna Szymkiewicz, Gabriela Wiktor, Jolanta Pulit-Prociak, Marcin Banach
Abstract:
Carbon dots (CDs) have great potential for application in many fields of science. They are characterized by fluorescent properties that can be manipulated. Beyond its unique properties, the nanomaterial has many practical advantages: CDs are obtained easily, they undergo surface functionalization in a simple way, and a wide range of raw materials can be used for their synthesis. An interesting possibility is the use of numerous waste materials of natural origin. In the research presented here, the synthesis of CDs was carried out according to the principles of green chemistry. Beet molasses, with its high sugar content, was used as the natural raw material, making it an excellent high-carbon precursor for obtaining CDs. To increase the fluorescence effect, we modified the surface of the CDs with silver nanoparticles (Ag-CDs). The CDs were obtained by a hydrothermal method using microwave radiation, and the silver nanoparticles were formed via chemical reduction. The syntheses were planned using the Design of Experiments (DoE) method, with variable process parameters such as the concentration of beet molasses, the temperature, and the concentration of nanosilver; these affected the properties and parameters of the obtained particles. The Ag-CDs were analyzed by UV-Vis spectroscopy. The fluorescence properties were characterized, and the appropriate excitation wavelength selected, by spectrofluorimetry. Particle sizes were checked using the DLS method. The influence of the input parameters on the obtained results was also studied.Keywords: fluorescence, modification, nanosilver, molasses, green chemistry, carbon dots
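A DoE plan over the three variable parameters named in the abstract (molasses concentration, temperature, nanosilver concentration) can be enumerated as a full-factorial design with the standard library alone. The factor levels below are hypothetical, since the abstract does not state them.

```python
from itertools import product

# Hypothetical factor levels for an Ag-CDs synthesis screening; the actual
# DoE levels used in the study are not given in the abstract.
factors = {
    "molasses_concentration_pct": [5, 10, 20],
    "temperature_C": [160, 180, 200],
    "nanosilver_mM": [0.5, 1.0, 2.0],
}

# Full-factorial design: every combination of factor levels is one run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 27 runs for a 3x3x3 design
```

Fractional designs would cut the run count when three levels per factor is too many to synthesize, at the cost of confounding some interactions.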
Procedia PDF Downloads 84
1010 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge to the efficient implementation of quantum chemistry software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. The integrals are approximated using linear combinations of a small number of moments, and machine learning algorithms are applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, together with an iterative feature-masking approach that compresses the input vector by identifying a small subset of orbitals whose coefficients are sufficient for the computation of the quantum state energy. Finally, a small ensemble of neural networks, with a median rule for decision fusion, was shown to improve the results compared to a single network.Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction
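The final fusion step — a median rule over the ensemble's outputs — is easy to make concrete. The coefficient estimates below are made-up numbers illustrating why a per-sample median is robust when one network in the ensemble is badly off; they are not values from the paper.

```python
import numpy as np

def median_fusion(predictions: np.ndarray) -> np.ndarray:
    """Fuse an ensemble's outputs by taking the per-output median across
    networks (axis 0), which tolerates a single badly-off member."""
    return np.median(predictions, axis=0)

# Hypothetical coefficient estimates from a 5-network ensemble for 3 integrals.
preds = np.array([
    [0.51, -1.20, 0.08],
    [0.49, -1.18, 0.07],
    [0.50, -1.22, 0.09],
    [0.52, -1.19, 0.08],
    [9.00, -1.21, 0.50],   # one outlier network
])
fused = median_fusion(preds)
print(fused.tolist())  # [0.51, -1.2, 0.08] — the outlier is ignored
```

A mean rule over the same predictions would be dragged to about 2.2 on the first coefficient, which is why the median is the safer fusion choice here.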
Procedia PDF Downloads 114
1009 Conceptual Solution and Thermal Analysis of the Final Cooling Process of Biscuits in One Confectionary Factory in Serbia
Authors: Duško Salemović, Aleksandar Dedić, Matilda Lazić, Dragan Halas
Abstract:
The paper presents a conceptual solution for the final cooling of the chocolate dressing on biscuits in a confectionary factory in Serbia. The proposed conceptual solution was derived from the desired technological process of the final cooling of biscuits and from the required process parameters, which were an integral part of the project task. The process parameters required for proper hardening and coating formation are the heat exchanged per unit time between the two media (air and chocolate dressing), the air speed inside the tunnel cooler, and the surface of all biscuits in contact with the air; these parameters are calculated in the paper. The final cooling of the chocolate dressing on the biscuits can be optimized by varying the process parameters and the dimensions of the tunnel cooler in search of their appropriate values. Accurate temperature predictions and the fluid flow analysis are obtained from the heat balance and flow balance equations, with the theory of similarity in mind. Furthermore, some parameters, such as the inlet temperature of the biscuits and the inlet air temperature, were adopted from the preceding process stages. A thermal calculation was carried out, and it demonstrated that the percentage error between the contact surface of the air and the chocolate biscuit topping obtained from the heat balance and that obtained geometrically from the proposed conceptual solution does not exceed 0.67%, which is very good agreement. This ensures the quality of the cooling process of the chocolate dressing applied to the biscuit and the hardness of its coating.Keywords: chocolate dressing, air, cooling, heat balance
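The 0.67% agreement check is a plain relative-error computation between the contact area obtained from the heat balance and the one obtained geometrically. The areas below are hypothetical, chosen only to land on the reported bound; the paper does not state the absolute areas.

```python
def percent_error(value: float, reference: float) -> float:
    """Relative error (%) of an estimate against a reference value."""
    return abs(value - reference) / reference * 100.0

# Hypothetical contact areas (m^2): geometric estimate vs. heat-balance result.
area_heat_balance = 12.00
area_geometric = 12.08
print(round(percent_error(area_geometric, area_heat_balance), 2))  # 0.67
```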
Procedia PDF Downloads 79