Search results for: output mode
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3872

212 X-Ray Detector Technology Optimization In CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is improved because the reduced routing lowers the load capacitance. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns about the radiation dose received by patients have been raised in both the medical and regulatory communities. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise and temporal/spatial resolution.
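
As a rough illustration of the fast-kVp switching constraint described above, the sketch below models residual scintillator light (afterglow) leaking from one view into the next, assuming a single-exponential decay; the decay constant, view period and signal levels are invented for illustration, not measured detector data.

```python
import numpy as np

# Hypothetical illustration: residual-signal (afterglow) cross-contamination
# between alternating 80/140 kVp views in fast-kVp switching, assuming a
# single-exponential scintillator decay. All numbers are made-up values.
view_period_us = 250.0   # time between consecutive views (assumed)
decay_time_us = 30.0     # primary decay time of the scintillator (assumed)

def residual_fraction(t_us: float, tau_us: float) -> float:
    """Fraction of a view's signal still glowing after t_us microseconds."""
    return np.exp(-t_us / tau_us)

# Signal measured in a low-kVp view = true low-kVp signal + leftover high-kVp light.
s_high, s_low = 1.0, 0.45   # illustrative relative signal levels
leak = s_high * residual_fraction(view_period_us, decay_time_us)
contamination_pct = 100.0 * leak / s_low
print(f"residual fraction: {residual_fraction(view_period_us, decay_time_us):.2e}")
print(f"cross-contamination of the low-kVp view: {contamination_pct:.3f} %")
```

A slower scintillator (larger decay time) raises the leaked fraction quickly, which is why primary speed and afterglow appear among the detector properties listed above.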

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 231
211 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

The carbon sequestration question of modern times requires the development of an in-situ method of measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. The specific gamma line with an energy of 4.438 MeV appearing under neutron irradiation can be attributed to soil carbon nuclei. Based on the measured gamma line intensity, assessments of soil carbon concentration can be made. This can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed here. Information regarding the absorbed neutron and gamma dose received by soil, and its distribution with depth, is discussed in this study. This information was generated by Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used for the analysis of possible induced irradiation effects. The physical, chemical and biological effects of neutron soil irradiation were considered. From a physical aspect, we considered neutron (produced by the PFTNA system) induction of new isotopes and estimated the possibility of an increased post-irradiation gamma background by comparison to the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical aspect, possible radiolysis of water (present in soil) was considered. Based on simulations of water radiolysis, we concluded that the gamma dose rates involved cannot produce radiolysis products at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were observed at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
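
The core measurement idea above, mapping the 4.438 MeV peak intensity to a carbon concentration, can be sketched as a simple linear calibration. The coefficients and count rates below are assumptions for illustration only, not the study's calibration.

```python
import numpy as np

# Hypothetical sketch: converting the net intensity of the 4.438 MeV carbon
# gamma peak into a soil carbon concentration via a linear calibration, as is
# done conceptually in neutron-gamma soil carbon analysis. The slope and the
# count rates below are invented for illustration.
def carbon_weight_percent(net_peak_counts_per_s, slope=0.0125, intercept=0.0):
    """Linear calibration: carbon wt% = slope * net peak rate + intercept."""
    return slope * np.asarray(net_peak_counts_per_s) + intercept

# Net rate = gross rate in the 4.438 MeV window minus background/interference.
gross_rate = 260.0   # counts/s in the carbon window (assumed)
background = 180.0   # counts/s of background and interference (assumed)
print(f"estimated soil carbon: {carbon_weight_percent(gross_rate - background):.2f} wt%")
```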

Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation

Procedia PDF Downloads 94
210 Phonological Encoding and Working Memory in Kannada Speaking Adults Who Stutter

Authors: Nirmal Sugathan, Santosh Maruthy

Abstract:

Background: A considerable number of studies have evidenced that phonological encoding (PE) and working memory (WM) skills operate differently in adults who stutter (AWS). In order to tap these skills, several paradigms have been employed, such as phonological priming, phoneme monitoring, and nonword repetition tasks. This study, however, utilizes a word jumble paradigm to assess both PE and WM using different modalities, which may give a better understanding of phonological processing deficits in AWS. Aim: The present study investigated PE and WM abilities in conjunction with lexical access in AWS using jumbled words. The study also aimed at investigating the effect of increased cognitive load on phonological processing in AWS by comparing speech reaction time (SRT) and accuracy scores across various syllable lengths. Method: Participants were 11 AWS (age range 19-26) and 11 adults who do not stutter (AWNS) (age range 19-26) matched for age, gender and handedness. Stimuli: Ninety 3-, 4-, and 5-syllable jumbled words (JWs) (n=30 per syllable-length category) constructed from Kannada words served as stimuli for the jumbled word paradigm. In order to generate the JWs, the syllables in the real words were randomly transposed. Procedures: To assess PE, the JWs were presented visually using DMDX software; for the WM task, JWs were presented auditorily through headphones. The participants were asked to silently manipulate the jumbled words to form a Kannada real word and respond verbally once the word was formed. The responses for both tasks were audio recorded using the record function in DMDX software, and the recorded responses were analyzed using PRAAT software to calculate the SRT. Results: SRT: Mann-Whitney test results demonstrated that AWS performed significantly slower on both tasks (p < 0.001), as indicated by increased SRT. Also, AWS presented with increased SRT on both tasks in all syllable-length conditions (p < 0.001). Effect of syllable length: Wilcoxon signed-rank tests revealed that, on the task assessing PE, the SRTs for 4-syllable JWs were significantly higher in both AWS (Z= -2.93, p=.003) and AWNS (Z= -2.41, p=.003) when compared to 3-syllable words. However, the differences between 4- and 5-syllable words were not significant. Task accuracy: The accuracy scores were calculated for the three syllable-length conditions for both PE and WM tasks and were compared across the groups using the Mann-Whitney test. The results indicated that the accuracy scores of AWS were significantly below those of AWNS in all three syllable conditions for both tasks (p < 0.001). Conclusion: The above findings suggest that PE and WM skills are compromised in AWS, as indicated by increased SRT. Also, AWS were progressively less accurate in descrambling JWs of increasing syllable length; this may be interpreted as meaning that, rather than existing as a uniform deficiency, PE and WM deficits emerge when the cognitive load is increased. AWNS exhibited increased SRT but maintained accuracy for JWs of longer syllable length, whereas AWS did not benefit from increased reaction time; thus AWS were compromised on both SRT and accuracy while solving JWs of longer syllable length.
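
A minimal sketch of the statistical comparisons reported above (Mann-Whitney U between groups, Wilcoxon signed-rank within groups), using scipy on simulated SRT values rather than the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Simulated SRTs in milliseconds; group sizes match the study (11 per group),
# but all values are synthetic stand-ins.
rng = np.random.default_rng(0)
srt_awns = rng.normal(1500, 200, 11)   # adults who do not stutter
srt_aws = rng.normal(1900, 250, 11)    # adults who stutter

# Between-group comparison (independent samples): Mann-Whitney U test.
u_stat, p_between = mannwhitneyu(srt_aws, srt_awns, alternative="greater")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_between:.4f}")

# Within-group syllable-length effect (paired samples): Wilcoxon signed-rank.
srt_3syll = rng.normal(1700, 200, 11)
srt_4syll = srt_3syll + rng.normal(120, 60, 11)  # longer words -> slower (assumed)
w_stat, p_within = wilcoxon(srt_3syll, srt_4syll)
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_within:.4f}")
```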

Keywords: adults who stutter, phonological ability, working memory, encoding, jumbled words

Procedia PDF Downloads 211
209 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields the amount of collected data is increasing quickly, but with this increase the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating reducts have been developed, but most of them are only software implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes considerable time fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, which is a reduct with some additional, removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct; (ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared; the results show an increase in the speed of data processing in hardware.
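
A software sketch of the two-stage algorithm on a toy decision table may help: stage one extracts the core from singleton entries of the discernibility matrix (the role of the 'singleton detector'), and stage two greedily enriches the core by attribute frequency until a superreduct is reached. Counting frequency over the still-uncovered matrix entries is one reasonable reading of 'most common attributes'; the table itself is invented.

```python
from itertools import combinations

# Toy decision table: each row is (condition attributes a0..a2, decision).
table = [
    ((0, 1, 0), 0),
    ((1, 1, 0), 1),
    ((0, 0, 1), 0),
    ((1, 0, 1), 1),
]

def discernibility_matrix(table):
    """For every pair of objects with different decisions, record the set
    of condition attributes on which they differ."""
    entries = []
    for (x, dx), (y, dy) in combinations(table, 2):
        if dx != dy:
            entries.append({i for i, (a, b) in enumerate(zip(x, y)) if a != b})
    return entries

entries = discernibility_matrix(table)

# Stage 1: the core is the union of singleton entries -- attributes that are
# the only way to discern some pair of objects ('singleton detector').
core = set().union(*(e for e in entries if len(e) == 1)) if entries else set()

# Stage 2: greedily enrich the core with the most frequent attributes until
# every discernibility entry is covered; the result is a superreduct.
superreduct = set(core)
uncovered = [e for e in entries if not e & superreduct]
while uncovered:
    freq = {}
    for e in uncovered:
        for a in e:
            freq[a] = freq.get(a, 0) + 1
    superreduct.add(max(freq, key=freq.get))
    uncovered = [e for e in uncovered if not e & superreduct]

print("core:", core, "superreduct:", superreduct)
```

On this toy table the core is {0} (attribute a0 alone discerns two pairs) and it already covers all entries, so the superreduct equals the core; larger tables exercise the greedy stage.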

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 190
208 The Effect of Mindfulness-Based Interventions for Individuals with Tourette Syndrome: A Scoping Review

Authors: Ilana Singer, Anastasia Lučić, Julie Leclerc

Abstract:

Introduction: Tics, characterized by repetitive, sudden, involuntary motor movements or vocalizations, are prevalent in chronic tic disorder (CT) and Tourette Syndrome (TS). These neurodevelopmental disorders often coexist with various psychiatric conditions, leading to challenges and reduced quality of life. While medication in conjunction with behavioral interventions, such as Habit Reversal Training (HRT), Exposure and Response Prevention (ERP), and Comprehensive Behavioral Intervention for Tics (CBIT), has shown efficacy, a significant proportion of patients experience persistent tics. Thus, innovative treatment approaches, such as mindfulness-based approaches, are necessary to improve therapeutic outcomes. Nonetheless, the effectiveness of mindfulness-based interventions in the context of CT and TS remains understudied. Objective: The objective of this scoping review is to provide an overview of the current state of research on mindfulness-based interventions for CT and TS, identify knowledge and evidence gaps, compare the effectiveness of mindfulness-based interventions with other treatment options, and discuss implications for clinical practice and policy development. Method: Following the guidelines of Peters (2020) and the PRISMA-ScR, a scoping review was conducted. Multiple electronic databases were searched from inception until June 2023, including MEDLINE, EMBASE, PsycINFO, Global Health, PubMed, Web of Science, and Érudit. Inclusion criteria were applied to select relevant studies, and data extraction was performed independently by two reviewers. Results: Five papers were included in the study. Firstly, mindfulness interventions were found to be effective in reducing anxiety and depression while enhancing overall well-being in individuals with tics. Furthermore, the review highlighted the potential role of mindfulness in enhancing functional connectivity within the Default Mode Network (DMN) as a compensatory function in TS patients. This suggests that mindfulness interventions may complement and support traditional therapeutic approaches, particularly HRT, by positively influencing brain networks associated with tic regulation and control. Conclusion: This scoping review contributes to the understanding of the effectiveness of mindfulness-based interventions in managing CT and TS. By identifying research gaps, this review can guide future investigations and interventions to improve outcomes for individuals with CT or TS. Overall, these findings emphasize the potential benefit of incorporating mindfulness-based interventions as one component within comprehensive treatment strategies. However, it is essential to acknowledge the limitations of this scoping review, such as the absence of a pre-established protocol and the limited number of studies available for inclusion. Further research and clinical exploration are necessary to better understand the specific mechanisms and optimal integration of mindfulness-based interventions with existing behavioral interventions for this population.

Keywords: scoping reviews, Tourette Syndrome, tics, mindfulness-based, therapy, intervention

Procedia PDF Downloads 56
207 Challenges in the Last Mile of the Global Guinea Worm Eradication Program: A Systematic Review

Authors: Getahun Lemma

Abstract:

Introduction: Guinea Worm Disease (GWD), also known as dracunculiasis, is one of the oldest diseases in the history of mankind. Dracunculiasis is caused by a parasitic nematode, Dracunculus medinensis. Infection is acquired by drinking water contaminated with copepods containing infective Guinea Worm (GW) larvae. Almost one year after infection, the worm usually emerges through the skin on a lower limb, causing severe pain and disability. Although there is no effective drug or vaccine against the disease, the chain of transmission can be effectively broken with simple and cost-effective public health measures. Death due to dracunculiasis is very rare; however, the disease results in a wide range of physical, social and economic sequelae. It is usually found in rural, remote places of Sub-Saharan African countries, among marginalized societies. Currently, GWD is one of the neglected tropical diseases on the verge of eradication. The global Guinea Worm Eradication Program (GWEP) was started in 1980. Since then, the program has achieved tremendous success in reducing the global burden, from 3.5 million cases to only 28 human cases at the end of 2018. However, it has recently been shown that not only humans can become infected: a total of 1,105 animal infections had been reported by the end of 2018. Therefore, the objective of this study was to identify the existing challenges in the last mile of the GWEP in order to inform policymakers and stakeholders on potential measures to finally achieve eradication. Method: Systematic literature review of articles published from January 1, 2000 until May 30, 2019. Papers listed in the Cochrane Library, Google Scholar, ProQuest, PubMed and Web of Science databases were searched and reviewed. Results: Twenty-five articles met the inclusion criteria of the study and were selected for analysis. Relevant data were extracted, grouped and descriptively analyzed. The results showed the main challenges complicating the last mile of the global GWEP: 1. Unusual modes of transmission; 2. Rising animal Guinea Worm infection; 3. Suboptimal surveillance; 4. Insecurity; 5. Inaccessibility; 6. Inadequate safe water points; 7. Migration; 8. Poor case containment measures; 9. Ecological changes; and 10. New geographic foci of the disease. Conclusion: This systematic review identified that most of the current challenges in the GWEP have been present since the start of the campaign. However, the recent change in the epidemiological patterns and nature of GWD in the last remaining endemic countries illustrates a new twist in the global GWEP. Considering the complex nature of the current challenges, a more coordinated and multidisciplinary approach to GWD prevention and control is needed in the last mile of the campaign. Such strategies would help make history: dracunculiasis would be the first parasitic disease ever eradicated.

Keywords: dracunculiasis, eradication program, guinea worm, last mile

Procedia PDF Downloads 100
206 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and increase the autonomy of isolated networks with the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs, based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method built on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating-cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating-cost estimation in several microgrid configurations. BCQ is an offline RL algorithm that is known to be data-efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer; the latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
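
A minimal sketch of the batch-constrained idea, in the discrete-action form of BCQ: Q-learning updates use only a fixed buffer, and the bootstrapped maximum is restricted to actions the behavior policy took sufficiently often. States, actions and rewards are toy stand-ins for a microgrid EMS (e.g., hydrogen tank charge/discharge levels), and all constants are assumptions.

```python
import numpy as np

# Tabular caricature of discrete Batch-Constrained Q-learning (BCQ).
n_states, n_actions, gamma, lr, tau = 20, 3, 0.99, 0.1, 0.3

rng = np.random.default_rng(1)
# Offline buffer of (s, a, r, s') transitions collected by previous agents.
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(5000)]

# Estimate the behavior policy from the buffer (counts -> frequencies).
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in buffer:
    counts[s, a] += 1
behavior = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

Q = np.zeros((n_states, n_actions))
for epoch in range(50):                 # no environment interaction at all
    for s, a, r, s2 in buffer:
        # BCQ constraint: bootstrap only over actions the data supports.
        allowed = behavior[s2] >= tau * behavior[s2].max()
        target = r + gamma * Q[s2, allowed].max()
        Q[s, a] += lr * (target - Q[s, a])

policy = Q.argmax(axis=1)               # greedy EMS policy from batch data
print("greedy action per state:", policy)
```

The constraint is what distinguishes BCQ from plain offline Q-learning: it avoids bootstrapping from actions that the buffer barely covers, which is where offline value estimates typically diverge.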

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 94
205 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer

Authors: S. M. Giripunje, Mohit Kumar

Abstract:

Acoustic energy, which exists in our everyday life and environment, has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technologies like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high-amplitude and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not as popular as other energy harvesting methods, since sound waves have a lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements for certain applications are also correspondingly low, and there is an added incentive to absorb noise in order to reduce noise pollution. The ability to reclaim acoustic energy and store it in a usable electrical form therefore enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with polyvinylidene fluoride (PVDF) and ZnO-nanoparticle-doped PVDF piezoelectric cantilever beams placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified acoustic resonant standing wave develops inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration of the PVDF piezoelectric beams, generating electricity through the direct piezoelectric effect. In order to maximize the amount of harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator. With a single PVDF beam placed inside the resonator, the harvested voltage and power reach their maxima near the resonator tube's open inlet, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved toward the resonator tube's closed end, the voltage and power gradually decrease due to the decreasing acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two different configurations: aligned and zigzag. With the zigzag configuration, which has the more open path for acoustic air-particle motion, significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air-particle motion caused by the beams, it is found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique for harvesting sound wave energy has great potential for converting free energy into useful energy.
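
The frequency-matching condition described above can be sketched numerically: the first eigenfrequency of a closed-open (quarter-wavelength) tube is f1 = c/(4L), and the beam length is chosen so that the cantilever's first bending mode coincides with it. The material and geometry values below are generic assumptions, not the authors' design numbers.

```python
import numpy as np

c_air = 343.0                        # speed of sound in air, m/s

def tube_eigenfrequency(length_m: float) -> float:
    """First eigenfrequency of a quarter-wavelength (closed-open) tube."""
    return c_air / (4.0 * length_m)

def cantilever_eigenfrequency(E, rho, t, L):
    """First bending mode of a thin rectangular cantilever:
    f1 = (1.875**2 / (2*pi*L**2)) * sqrt(E*I/(rho*A)), with I/A = t**2/12."""
    return (1.875**2 / (2 * np.pi * L**2)) * np.sqrt(E * t**2 / (12 * rho))

L_tube = 0.5                         # tube length, m (assumed) -> ~171 Hz
f_target = tube_eigenfrequency(L_tube)

# PVDF-like properties (assumed): E ~ 3 GPa, density ~ 1780 kg/m3, 110 um film.
E, rho, t = 3e9, 1780.0, 110e-6
# Solve f_cantilever(L) = f_target for the beam length L (f scales as 1/L**2).
L_beam = np.sqrt((1.875**2 / (2 * np.pi * f_target)) * np.sqrt(E * t**2 / (12 * rho)))
print(f"tube f1 = {f_target:.1f} Hz, matched beam length = {1e3 * L_beam:.1f} mm")
```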

Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)

Procedia PDF Downloads 353
204 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications

Authors: Gema M. Rodado, Jose M. Olavarrieta

Abstract:

Hydrogen fuel cell technologies have experienced a great boost in the last decades, with a significant increase in the production of these devices for both stationary and portable (mainly automotive) applications, driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity, using hydrogen and oxygen gases as reactants and obtaining water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of the low emissions they produce (almost zero), their high efficiency, and their low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market requires the development of standardized and validated procedures to test and evaluate their performance in different environmental conditions, including vibrations and freeze-thaw cycles. Vibration and extremely low/high temperatures can affect the physical integrity, and even the proper operation or performance, of a fuel cell stack placed in a vehicle in circulation or exposed to different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks intended for vehicles, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration test analysis on fuel cells in the three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions; and a test station designed and developed by the CNH2 to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitudes in the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles, defined by a temperature range between 313 K and 243 K, an average relative humidity of 50%, and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station to evaluate the robustness of the stack. The results were very similar, which indicates that the tests did not affect the fuel cell stack structure and performance. The proposed procedures were verified and can be used as a starting point for tests with other fuel cells.
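
As a sketch of the thermal program described above, the snippet below builds one freeze-thaw setpoint cycle between 313 K and 243 K at the recommended 1 K/min ramp rate; the dwell time at each extreme is an assumption, since the abstract does not state it.

```python
import numpy as np

# Setpoint profile for one freeze-thaw cycle: ramp down, dwell, ramp up, dwell.
T_HIGH, T_LOW, RAMP_K_PER_MIN, DWELL_MIN = 313.0, 243.0, 1.0, 60.0

def freeze_thaw_cycle():
    """Return one cycle as (minutes, kelvin) breakpoints."""
    ramp_min = (T_HIGH - T_LOW) / RAMP_K_PER_MIN        # 70 min per ramp
    t = np.array([0, ramp_min, ramp_min + DWELL_MIN,
                  2 * ramp_min + DWELL_MIN, 2 * (ramp_min + DWELL_MIN)])
    T = np.array([T_HIGH, T_LOW, T_LOW, T_HIGH, T_HIGH])
    return t, T

t, T = freeze_thaw_cycle()
print("breakpoints (min, K):", list(zip(t, T)))   # one cycle lasts 260 min here
```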

Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests

Procedia PDF Downloads 89
203 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. This study aimed to identify the most influential delay factors in construction projects in Iraq, those affecting time, cost, and quality, and to find the best solutions that address delays before they unbalance a project. Thirty projects in different areas of construction were selected as a sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants provided data about the projects through a data collection instrument distributed as a survey. A mixed approach and methods were applied in this study. Mathematical data analysis was used to construct models that predict delay in the time and cost of projects before they start; artificial neural network (ANN) analysis was selected as the mathematical approach. These models are mainly intended to help decision-makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to remove obstacles thoroughly so as to develop this industry in Iraq. This approach was applied using the data collected through the survey and questionnaire. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries/regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management, and have never been used in Iraq to find solutions for problems in the construction industry. This methodology can also be used for complicated problems for which there is no ready interpretation or solution. In some cases where statistical analysis was conducted, the problem did not follow a linear equation or the correlation was weak; we therefore suggested using ANNs, because they suit nonlinear problems and can find the relationship between input and output data, which proved very supportive.
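
A minimal sketch of the kind of ANN used for such delay prediction, here with scikit-learn's MLPRegressor on synthetic stand-in data; the feature list mirrors the delay factors named in the findings, but the values, targets and network size are assumptions, not the study's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 30  # projects, matching the sample size of the study
# Hypothetical factor scores (0-10): contractor failure, redesign/change
# orders, security issues, low-price bid selection, weather, owner failure.
X = rng.uniform(0, 10, size=(n, 6))
# Synthetic nonlinear ground truth for time overrun (%), for the demo only.
y = 5 + 4 * X[:, 0] + 2 * X[:, 2] ** 1.5 + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out projects: {ann.score(X_te, y_te):.2f}")
```

The nonlinearity in the hidden layer is the reason ANNs are preferred here over linear regression when correlations are weak or the relationship is not linear.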

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 131
202 Exploring Barriers to Quality of Care in South African Midwifery Obstetric Units: The Perspective of Nurses and Midwives

Authors: J. Dutton, L. Knight

Abstract:

Achieving quality and respectful maternal health care is part of the global agenda to improve reproductive health and achieve universal reproductive rights. Barriers to quality of care in South African maternal health facilities exist at both systemic and individual levels. In addition, the normalization of gender violence within South Africa has a large impact on people seeking health care as well as those who provide care within health facilities. The hierarchical environment of South Africa's public health system penalizes both patients and providers, who struggle to assume any real power. This paper explores how systemic and individual-level barriers to quality of care affect the midwifery profession within South African maternal health services and create, at times, an environment of enmity rather than care. It analyzes and discusses data collected from in-depth, semi-structured interviews with nurses and midwives at three maternal health facilities in South Africa. The study took a holistic approach to understanding the realities of nurses and midwives, in order to explore the ways in which experience informs their practice and their treatment of pregnant women. Through collecting and analyzing narratives, linkages have been made between nurses' and midwives' day-to-day and historical experiences and disrespectful care. Findings from this study show that barriers to quality of care take form in complex and interrelated ways. The physical structure of the health facility, human resource shortages, and the current model of maternal health care, which often lacks a person-centered approach, are entangled with personal beliefs and attitudes about what it means to be a midwife, creating an environment that is often not conducive to a positive birthing experience. This entanglement sits within a society of high rates of violence, inequality, and poverty. Having teased out the nuances of each of these barriers and the multiple ways they reinforce each other, the findings of this paper demonstrate that birth, and the work of a midwife, are situated in a mode of discipline and punishment within this context. For analytical purposes, this paper breaks down the individual barriers to quality care and discusses their current and historical significance before returning to the interrelated forms in which barriers to quality maternal health care manifest. In conclusion, this paper questions the role of agency in the ability to subvert systemic barriers to quality care, and ideas around shifting attitudes and beliefs of and about midwives. International and local policies and guidelines have a role to play in realizing such shifts; however, as this paper suggests, when policy does not speak to the local context, there is a risk of it contributing to frustrations and impeding the path to quality and respectful maternal health care.

Keywords: disrespect and abuse in childbirth, midwifery, South African maternal health care, quality of care

Procedia PDF Downloads 135
201 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez- Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique to record downhole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geosteering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers, so it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by industry due to their high performance. However, they have major limitations, namely: (i) the analytical solution of the aforementioned system of ODEs exists only for piecewise-constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs is currently unknown; (ii) in geosteering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome these limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide the computations into two parts: (a) offline computations, which are independent of the tool positions, precomputed only once and reused for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments, comparing the numerical solutions to semi-analytic ones when the latter are available.
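
The offline/online split described above can be caricatured in a few lines for one Hankel mode: the discrete operator for the 1D layered model is factorized once, independently of the tool position, and each logging position then costs only a cheap triangular solve. The tridiagonal operator, resistivities and point source below are toy stand-ins for the actual variational formulation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

n = 400
resistivity = np.where(np.arange(n) < n // 2, 1.0, 20.0)   # two-layer 1D model
k2 = 0.5   # Hankel-mode wavenumber term, acts as a zeroth-order (mass) term
main = 2.0 / resistivity + k2
A = diags([-1.0 / resistivity[:-1], main, -1.0 / resistivity[1:]],
          offsets=[-1, 0, 1], format="csc")

lu = splu(A)            # OFFLINE: factorize once, independent of tool position

def solve_for_tool_position(idx: int) -> np.ndarray:
    """ONLINE: only the source (tool position) changes; reuse the LU factors."""
    rhs = np.zeros(n)
    rhs[idx] = 1.0
    return lu.solve(rhs)

fields = [solve_for_tool_position(i) for i in range(50, 350, 50)]
print("solved", len(fields), "logging positions with one factorization")
```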

Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 360
200 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis

Authors: Sakshi Piplani, Ajit Kumar

Abstract:

Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. The teratogenicity caused by VPA has prompted the search for next-generation drugs with better efficacy and fewer side effects. Recent studies have identified HDAC8 as a direct target of VPA that causes the teratogenic effect in the foetus. We have employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB entry 3F07, having the fewest missing and mutated regions, was selected for the study. The missing residues of the loop region were constructed using MODELLER, and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with ChemSketch software using a 3D CHARMM-type force field. Four major enzymes of neurotransmitter metabolism (GABA-T, SSADH, α-KGDH, GAD) involved in anticonvulsant activity were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant compared to VPA, and are thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using the licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the particle mesh Ewald algorithm. The pH of the system was set to 7.4, the temperature to 323 K, and the pressure to 1 atm. Simulation snapshots were stored every 25 ps. The MD simulation was run for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns. The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (at 20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with 3F07 was performed using AutoDock 4.2.6. The Lamarckian Genetic Algorithm was used to generate conformations of the docked ligand and structure. The docking study revealed that VPA and its analogues have greater affinity towards the 'hydrophobic active site channel', whose hydrophobic properties allow VPA and its analogues to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138 and TRP137, while TRP137 and SER138 showed hydrogen-bonding interactions with the VPA analogues. Fourteen analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues and assess their druggability. On the basis of ADMET screening, 09 molecules were selected and are being used for in-vivo evaluation in a Danio rerio model.
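
The analogue-screening step described above, retaining analogues whose docked binding energy and inhibition constant improve on VPA's, reduces to a simple filter; the records below are mock docking outputs, not values from the study.

```python
# Mock docking results: binding free energy (kcal/mol, more negative is
# better) and inhibition constant Ki (uM, lower is better). Invented values.
vpa = {"name": "VPA", "dG_kcal": -4.1, "Ki_uM": 950.0}
analogues = [
    {"name": "analog_001", "dG_kcal": -5.3, "Ki_uM": 120.0},
    {"name": "analog_002", "dG_kcal": -3.8, "Ki_uM": 1400.0},
    {"name": "analog_003", "dG_kcal": -4.9, "Ki_uM": 300.0},
]

# Keep analogues that beat VPA on both criteria, as in the selection of
# 75 out of 216 analogues described above.
selected = [a for a in analogues
            if a["dG_kcal"] < vpa["dG_kcal"] and a["Ki_uM"] < vpa["Ki_uM"]]
print("predicted anticonvulsant candidates:", [a["name"] for a in selected])
```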

Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid

Procedia PDF Downloads 217
199 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise is improved because the reduced routing lowers the load capacitance. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns about the radiation dose received by patients have been raised in both the medical and regulatory communities. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 169
198 Investigating the Urban Heat Island Phenomenon in A Desert City Aiming at Sustainable Buildings

Authors: Afifa Mohammed, Gloria Pignatta, Mattheos Santamouris, Evangelia Topriska

Abstract:

Climate change is one of the global challenges exacerbated by rapid urbanization. The Urban Heat Island (UHI) phenomenon can be considered an effect of urbanization and, together with climate change, is responsible for the overheating of urban centers and downtowns. The purpose of this paper is to quantify and analyze the UHI intensity in Dubai, United Arab Emirates (UAE), by examining the relationship between the UHI and different meteorological parameters (temperature, wind speed, wind direction). Climate data were collected from three meteorological stations in Dubai (Dubai Airport - Station 1, Al-Maktoum Airport - Station 2, and Saih Al-Salem - Station 3) for a period of five years (2014-2018) at hourly resolution, and a clustering technique was adopted as one of the methodological tools. The collected data of each station were divided into six clusters according to wind direction, whether from the sea side, from the desert side, or from the coastal side in between, to investigate the relationship between temperature and wind speed through UHI measurements for Dubai Airport - Station 1 compared with Al-Maktoum Airport - Station 2. In this case, the UHI value is determined by the temperature difference between both stations, with Station 1 considered as located in an urban area and Station 2 in a suburban area. The same UHI calculation has been applied to Al-Maktoum Airport - Station 2 and Saih Al-Salem - Station 3, with Station 2 considered as urban and Station 3 as suburban. The performed analysis investigates the relation between the two environmental parameters (temperature and wind speed) and the UHI intensity when the wind comes from the sea side, from the desert side, and from the remaining directions. The analysis shows that the correlation between temperature and UHI intensity (the temperature difference between Dubai Airport - Station 1 and Saih Al-Salem - Station 3, and between Al-Maktoum Airport - Station 2 and Saih Al-Salem - Station 3) is strong and negative when the wind comes from the sea side, comparing Stations 1 and 2, while the relationship is almost zero (no relation) when the wind comes from the desert side; in that case, temperature and UHI are independent at Station 2. Likewise, the correlation between UHI and wind speed is weak for both stations when the wind comes from the sea side, and no relationship between the UHI phenomenon and wind speed was found when the wind comes from the desert side. In conclusion, wind coming from the sea side and wind coming from the desert side have different effects on the UHI, which is strongly affected by meteorological parameters. The output of this study enables a better characterization of the UHI phenomenon under a desert climate, which will help inform about UHI intensity and extract recommendations in two main categories: the planning of new cities and the design of buildings.
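
A compact sketch of the UHI-intensity computation described above: hourly urban-minus-suburban temperature differences, grouped into wind-direction clusters before correlating with temperature and wind speed. The data frame here is synthetic and the cluster angles are assumptions; the study used five years of hourly records and six clusters.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 24 * 365   # one synthetic year of hourly records
df = pd.DataFrame({
    "T_station1": rng.normal(33.0, 5.0, n),    # urban (Dubai Airport), assumed
    "T_station3": rng.normal(31.5, 5.0, n),    # suburban reference (Saih Al-Salem)
    "wind_dir_deg": rng.uniform(0, 360, n),
    "wind_speed": rng.gamma(2.0, 2.0, n),
})
# UHI intensity = urban minus suburban temperature, hour by hour.
df["UHI"] = df["T_station1"] - df["T_station3"]

# Cluster hours by wind origin: sea breeze vs. desert wind (angles assumed).
df["cluster"] = np.where(df["wind_dir_deg"].between(270, 360), "sea", "desert")

for name, g in df.groupby("cluster"):
    print(name,
          "corr(T, UHI) =", round(g["T_station1"].corr(g["UHI"]), 2),
          "corr(wind, UHI) =", round(g["wind_speed"].corr(g["UHI"]), 2))
```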

Keywords: meteorological data, subtropical desert climate, urban climate, urban heat island (UHI)

Procedia PDF Downloads 112
197 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Because of their limited resources, companies must ensure that their financial resources are directed to the right workstations in order to be effective and to survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap that is studied in depth in this work, whose purpose is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair; lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time-to-repair improvement, (ii) focused time-between-failures improvement, (iii) distributed time-to-repair improvement, (iv) distributed time-between-failures improvement, (v) focused time-to-repair and time-between-failures improvement, (vi) distributed time-to-repair and time-between-failures improvement, (vii) hybrid time-to-repair improvement, (viii) hybrid time-between-failures improvement, (ix) a time-to-repair improvement strategy directed at the two capacity constrained resources, and (x) a time-between-failures improvement strategy directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results; when a large investment in the capacity constrained resources is not possible, companies should use hybrid approaches. An important contribution to the academy is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and different positions of the capacity constrained resources. Finally, both the hybrid time-to-repair and hybrid time-between-failures strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analyzed; future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
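
The mechanism connecting these two variables to lead time can be made concrete with the standard Factory Physics relations: availability A = mf/(mf + mr) and effective process time te = t0/A at the capacity constrained resource. The numbers below are illustrative, not taken from the studied flow shop.

```python
# Worked example: how MTTR and MTBF inflate effective process time, which in
# turn drives utilization and lead time at the capacity constrained resource.
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Long-run fraction of time the workstation is up."""
    return mtbf_h / (mtbf_h + mttr_h)

t0 = 2.0                                  # natural process time, min (assumed)
base = {"mtbf_h": 40.0, "mttr_h": 4.0}    # baseline reliability (assumed)

for label, params in {
    "baseline": base,
    "halve MTTR": {**base, "mttr_h": 2.0},
    "double MTBF": {**base, "mtbf_h": 80.0},
}.items():
    A = availability(**params)
    print(f"{label:12s} A = {A:.3f}, effective process time = {t0 / A:.3f} min")
```

In this toy case, halving MTTR and doubling MTBF yield the same availability gain; in the simulated flow shop the two levers differ because repair-time variability also feeds the lead time, which is why the two were studied separately.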

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 93
196 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube

Authors: Alak Kumar Patra, Nilanjan Mitra

Abstract:

A higher strength-to-weight ratio is the main advantage of sandwich composite structures, but interfacial delamination between the face sheet and the core is a major problem in these structures. Many research works are devoted to improving the interfacial fracture toughness of composites, the majority of them on nano- and laminated composites; work on the influence of a multiwalled carbon nanotube (MWCNT) dispersed resin system on the interface fracture of glass-epoxy PVC-core sandwich composites is extremely limited. Here, a finite element study is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC-core sandwich composites with and without MWCNT. Results demonstrate an improvement in the interface fracture toughness values (Gc) of samples with certain percentages of MWCNT. In addition, dispersion of MWCNT in epoxy resin through sonication, followed by mixing of the hardener and vacuum resin infusion (VRI), the technology used in this study, is an easy and cost-effective methodology in comparison to previously adopted methods, which were limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for maximum gain in interfacial fracture toughness. The results agree with the finite element study, high-resolution transmission electron microscopy (HRTEM) analysis, and fracture micrographs from field emission scanning electron microscopy (FESEM). The interface fracture toughness (Gc) of the DCB sandwich samples is calculated using the compliance calibration (CC) method, with the modification due to shear. Compliance (C) vs. crack length (a) data of the modified sandwich DCB specimens are fitted to a power function of crack length. The calculated mean value of the exponent n from the plots of the experimental results is 2.22, which differs from the value (n=3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis for the modified compliance calibration method). Differentiating C with respect to crack length (a) and substituting it into the expression for Gc provides its value. The research demonstrates an improvement of 14.4% in peak load-carrying capacity and 34.34% in interface fracture toughness Gc for samples with 1.5 wt% MWCNT (weight % taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper focuses on the significant improvement in the experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without, achieved using the much simpler method of sonication. Good dispersion of MWCNT was observed by HRTEM for 1.5 wt% MWCNT addition in comparison to other percentages. FESEM studies also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility is also observed to be higher for samples with MWCNT than for samples without.
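
The compliance-calibration reduction described above can be sketched as follows: fit C = k·aⁿ to the compliance vs. crack length data, then evaluate Gc = n·Pc·δc/(2·b·a) at each crack length. The DCB numbers below are invented for illustration; only the fitted exponent echoes the study's mean value of n = 2.22.

```python
import numpy as np

# Synthetic DCB data (all assumed): crack lengths, compliances following a
# power law with exponent 2.22, critical loads and specimen width.
a = np.array([30e-3, 40e-3, 50e-3, 60e-3])      # crack lengths, m
C = 1.2e-5 * (a / 30e-3) ** 2.22                # compliance, m/N
Pc = np.array([120.0, 95.0, 80.0, 70.0])        # critical loads, N
b = 25e-3                                       # specimen width, m

# Fit log C = log k + n log a  ->  straight-line least squares.
n_fit, logk = np.polyfit(np.log(a), np.log(C), 1)
delta_c = Pc * np.exp(logk) * a ** n_fit        # opening displacement at Pc

Gc = n_fit * Pc * delta_c / (2 * b * a)         # fracture toughness, J/m^2
print(f"fitted exponent n = {n_fit:.2f}")
print("G_C (J/m^2):", np.round(Gc, 1))
```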

Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite

Procedia PDF Downloads 285
195 Arc Plasma Thermochemical Preparation of Coal to Effective Combustion in Thermal Power Plants

Authors: Vladimir Messerle, Alexandr Ustimenko, Oleg Lavrichshev

Abstract:

This work presents a plasma technology for solid fuel ignition and combustion. Plasma activation promotes more effective and environmentally friendly ignition and combustion of low-rank coal. To realise this technology at coal-fired power plants, plasma-fuel systems (PFS) were developed. PFS improve the combustion efficiency of power coals and decrease harmful emissions. A PFS is a pulverized-coal burner equipped with an arc plasma torch, which is the main element of the system. The plasma-forming gas is air: it is blown through the electrodes, forming a plasma flame whose temperature varies from 5000 to 6000 K. Plasma torch power varies from 100 to 350 kW, and the geometrical dimensions are as follows: height 0.4-0.5 m and diameter 0.2-0.25 m. The basis of the PFS technology is plasma thermochemical preparation of coal for burning, which consists of heating the pulverized coal and air mixture by arc plasma up to the temperature of coal volatile release and partial gasification of the char carbon. In the PFS the coal-air mixture is deficient in oxygen, so carbon is oxidised mainly to carbon monoxide. As a result, at the PFS exit a highly reactive mixture is formed of combustible gases and partially burned char particles, together with products of combustion, while the temperature of the gaseous mixture is around 1300 K. Further mixing with air promotes intensive ignition and complete combustion of the prepared fuel. PFS have been tested for boiler start-up and pulverized coal flame stabilization in different countries, on power boilers of 75 to 950 t/h steam productivity equipped with different types of pulverized coal burners (direct-flow, muffle and swirl burners). In PFS testing, power coals of all ranks (lignite, bituminous, anthracite and their mixtures) were incinerated; their volatile content ranged from 4 to 50%, ash content from 15 to 48%, and heat of combustion from 1600 to 6000 kcal/kg. To show the advantages of the plasma technology over conventional coal combustion technologies, a numerical investigation of plasma ignition, gasification and thermochemical preparation of pulverized coal for incineration in an experimental furnace with a heat capacity of 3 MW was performed. Two computer codes were used for the research. The computer simulation experiments were conducted for a low-rank bituminous coal of 44% ash content. Boiler operation was studied in the conventional combustion mode and with arc plasma activation of coal combustion. The experiments and computer simulation showed the ecological efficiency of the plasma technology: when a plasma torch operates in the regime of plasma stabilization of a pulverized coal flame, NOx emission is halved and the amount of unburned carbon is reduced fourfold. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.613.21.0005, project RFMEFI61314X0005).
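
A back-of-the-envelope energy balance for the thermochemical preparation step may clarify the scale involved: heating a coal-air stream to a volatile-release temperature requires roughly Q = ṁ·cp·ΔT. All values below are assumed round numbers, not PFS design data.

```python
# Rough estimate of the heating power needed to bring a coal-air stream up
# to a devolatilization temperature. Flow rate, specific heat and target
# temperature are assumptions for illustration only.
m_dot = 0.8                  # coal-air mixture flow through the PFS, kg/s
cp = 1.2e3                   # mean specific heat of the mixture, J/(kg K)
T_in, T_out = 300.0, 900.0   # inlet and assumed devolatilization temps, K

Q_kW = m_dot * cp * (T_out - T_in) / 1e3
print(f"required heating power ~ {Q_kW:.0f} kW")   # ~576 kW for these values
# A 100-350 kW torch would then treat only part of the burner flow, or a
# smaller flow rate, which is consistent with the torch heating a fraction
# of the mixture to kindle the rest.
```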

Keywords: coal, ignition, plasma-fuel system, plasma torch, thermal power plant

Procedia PDF Downloads 249
194 Viability of Permaculture Principles to Sustainable Agriculture Enterprises in Malta

Authors: Byron Baron

Abstract:

Malta is a Mediterranean archipelago with a combination of environmental conditions poorly suited to agriculture. This has resulted in a heavy dependence on agricultural chemicals, as well as over-extraction of groundwater, compounded by the destruction of natural habitat surrounding the land used for agriculture. Such prolonged intensive land use has further degraded Maltese soils. This study was therefore designed to assess the viability of implementing a sustainable agricultural system based on permaculture practices, compared with the traditional local practices applied in intensive farming. The permaculture model was implemented over a period of two years for a number of locally grown staple crops. The tangible targets included improved soil health, reduced water consumption, increased reliance on renewable energy, increased wild plant and insect diversity, and sustained crop yield. To achieve this in the permaculture test area, numerous practices were introduced. In line with permaculture principles, tillage was reduced, only natural fertilisers were used, no herbicides or pesticides were applied, irrigation was linked to a desalination system with sensors for monitoring soil parameters, mulching was practiced, and a photovoltaic system was installed. Furthermore, areas for wild plants were increased and controlled only by trimming, not mowing. A variety of environmental parameters were measured at regular intervals, as well as crop yield (in kilos of produce), to quantify any improvements in crop output and environmental conditions. The results show only a very slight improvement in overall soil health, owing to the brevity of the test period. Water consumption was reduced by over 50% with no apparent losses or ill effects on the crops. Renewable energy was sufficient to provide all electric power on-site, so apart from the initial investment costs, there were no limitations. Moreover, surrounding the commercial crops with borders of wild plants, while taking up less than 15% of the total land area, assisted pollination, increased animal visitors, and did not give rise to any pest infestations. The conclusion from this study is that, while the results are promising, more detailed and long-term studies are required to understand the full extent of the implications of such a transition. This hints at the untapped potential of investing in the island's available resources with the goal of improving the balance between economic prosperity and ecological sustainability.

Keywords: agronomic measures, ecological amplification, sustainability, permaculture

Procedia PDF Downloads 74
193 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car

Authors: Erik Vassøy Olsen, Hirpa G. Lemu

Abstract:

Inspired by the Formula 1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions in which university and college students worldwide compete with a single-seat race car they have designed and built. The design of the chassis or frame is a key component of the competition because the weight and stiffness properties are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight directly influences the design of other components in the car; among other benefits, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands reducing the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned chassis requirements, the monocoque design should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the requirements of the competition rules for monocoque designs, several tests were conducted, including a 3-point bending test, a perimeter shear test, and a test for absorbed energy. The test panels were made in-house with dimensions equivalent to the side impact zone of the monocoque, i.e. 275 mm x 500 mm, so that the results obtained from the tests would be representative. Different layups of the test panels with identical core material and the same number of layers of carbon fibre were tested and compared, and the influence of the core material thickness was also studied. Furthermore, analytical calculations and numerical analysis were conducted to check compliance with the stated rules for structural equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite material selected for the monocoque design have structural properties equivalent to those of a welded steel frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different lay-ups and compositions.
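
To make the structural-equivalency idea concrete, the sketch below compares the bending stiffness per unit width of a sandwich panel (thin-face approximation, D ≈ E_f*t_f*d^2/2) with that of a solid steel plate (D = E*t^3/12). The laminate and plate properties are assumed for illustration and are not the team's measured data.

```python
# Minimal structural-equivalency check: sandwich panel vs. steel, bending
# stiffness per unit width. All dimensions and moduli below are assumed.
E_f = 60e9      # Pa, effective face-sheet modulus (carbon/epoxy, assumed)
t_f = 1.0e-3    # m, face-sheet thickness (assumed)
c = 20e-3       # m, core thickness (assumed)
E_s = 200e9     # Pa, SAE/AISI 1010 steel
t_s = 2.0e-3    # m, equivalent steel plate thickness (assumed)

d = c + t_f                          # distance between face-sheet centroids
D_sandwich = E_f * t_f * d**2 / 2    # thin-face approximation, per unit width
D_steel = E_s * t_s**3 / 12          # solid plate, per unit width
print(f"sandwich/steel stiffness ratio ≈ {D_sandwich / D_steel:.1f}")
```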

Keywords: composite material, Formula student, ION racing, monocoque design, structural equivalence

Procedia PDF Downloads 476
192 Investigation of Hydrate Formation of Associated Petroleum Gas From Promoter Solutions for the Purpose of Utilization and Reduction of Its Burning

Authors: Semenov Matvei, Stoporev Andrey, Pavelyev Roman, Varfolomeev Mikhail

Abstract:

Gas hydrates are host-guest compounds whose guest molecules can be low-molecular-weight components of associated petroleum gas (C1-C4 hydrocarbons), carbon dioxide, hydrogen sulfide, and nitrogen. Gas hydrates have a number of unique properties that make them interesting from a technological point of view, for example, for storing hydrocarbon gases in solid form under moderate thermobaric conditions. The hydrate form of gas has a number of advantages, including a significant gas content in the hydrate and the relative safety and environmental friendliness of the process. Such technology could be especially useful in cold regions, where hydrate production, storage, and transportation can be more energy efficient. Recently, new developments have been proposed that seek to reduce the number of steps needed to obtain the finished hydrate, for example, by using a pressing device/screw inside the reactor. However, the energy consumption required for the hydrate formation process remains a challenge. Thus, the goal of the current work is to study the patterns and mechanisms of hydrate formation using small additions of hydrate formation promoters under static conditions; studying these aspects will help solve the problem of accelerated production of gas hydrates with minimal energy consumption. New compounds have been developed that can accelerate the formation of methane hydrate with a small amount of promoter in water, not exceeding 0.1% by weight. To test the influence of promoters on hydrate formation, standard experiments are carried out under dynamic conditions with stirring. During such experiments, the time at which hydrate formation begins (induction period), the temperature at which formation begins (supercooling), the rate of hydrate formation, and the degree of conversion of water to hydrate are assessed. This approach helps to determine the most effective compound in comparative experiments with different promoters and to select their optimal concentration. These experimental studies made it possible to examine the features of associated petroleum gas hydrate formation from promoter solutions under static conditions. Phase transformations were studied using high-pressure micro-differential scanning calorimetry under various experimental conditions, and visual studies of the growth mode of methane hydrate depending on the type of promoter were also carried out. The work extends the methodology for studying the effect of promoters on associated petroleum gas hydrate formation in order to identify new ways to accelerate the formation of gas hydrates without the use of mixing. This work presents the results of a study of associated petroleum gas hydrate formation using high-pressure differential scanning micro-calorimetry, visual investigation, gas chromatography, autoclave studies, and stability data. It was found that the synthesized compounds increase the conversion of water into hydrate under static conditions to as much as 96% by changing the growth mechanism of the associated petroleum gas hydrate.
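
As one hedged illustration of how the degree of water-to-hydrate conversion can be estimated from calorimetry of the kind described above, the sketch below compares a measured dissociation heat with the theoretical heat for full conversion. The sample mass, hydration number, and dissociation enthalpy are assumed literature-style values for a methane-like hydrate, not the authors' measurements.

```python
# Minimal sketch: water-to-hydrate conversion estimated from DSC dissociation heat.
m_water = 0.10e-3      # kg of water in the cell (assumed)
M_water = 18.015e-3    # kg/mol
n_hydration = 6.0      # water molecules per gas molecule, sI hydrate (approx.)
dH_diss = 54.0e3       # J per mol of gas, methane-like hydrate (literature approx.)

# Theoretical heat if ALL the water had converted to hydrate:
Q_max = (m_water / M_water) / n_hydration * dH_diss   # ≈ 50 J here
Q_meas = 48.0                                         # J, hypothetical measured heat
conversion = Q_meas / Q_max
print(f"water-to-hydrate conversion ≈ {conversion:.0%}")   # cf. the 96% reported
```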

Keywords: gas hydrate, gas storage, promoter, associated petroleum gas

Procedia PDF Downloads 33
191 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that analyze the use of water in the socio-economic process. On the supply side, the conjunctive use of groundwater has been considered in addition to simple limits on the availability of surface water; waterlogging and effects on water quality (mainly salinity) have also been addressed. In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, the growth of high-value irrigated crop production within the case-study basins, together with rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation: basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is built on a network of river basins that includes multiple source nodes (reservoirs, aquifers, watercourses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal, and industrial purposes, as well as instream uses of running water, on the coast of Peru. The economic benefits associated with water use are evaluated for different demand-management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors. This work represents a new effort to analyze water use at the regional level and to evaluate the modernization of integrated water resources management and socio-economic territorial development in Peru. It will also support the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential for presenting a theory of the production process based on a particular type of production function. This work also presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach; GEMPACK automates the process of translating the model specification into a model solution program.
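
A minimal sketch of the allocation core of such a model is given below, assuming a single shared supply constraint and illustrative benefit coefficients; the actual model embeds simulation and a CGE structure far beyond this linear program.

```python
# Minimal basin-level water allocation: maximize total benefit across sectors
# subject to a combined surface + groundwater supply. All numbers are illustrative.
from scipy.optimize import linprog

# Marginal benefits (monetary units per hm^3) for urban, agriculture, industry (assumed)
benefit = [5.0, 2.0, 3.5]
c = [-b for b in benefit]            # linprog minimizes, so negate to maximize

supply = 120.0 + 30.0                # hm^3: surface water + sustainable groundwater (assumed)
A_ub = [[1.0, 1.0, 1.0]]             # total allocation cannot exceed supply
b_ub = [supply]

# Per-sector minimum and maximum demands, hm^3 (assumed)
bounds = [(10.0, 40.0), (20.0, 100.0), (5.0, 50.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("allocation (urban, agri, industry):", res.x)
print("total benefit:", -res.fun)
```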

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 128
190 Socio-Cultural Economic and Demographic Profile of Return Migration: A Case Study of Mahaboobnagar District in ‘Andhra Pradesh’

Authors: Ramanamurthi Botlagunta

Abstract:

Return migration is a process; it is not a new phenomenon. People have been migrating since civilization began. In the case of the Indian diaspora, people migrated both before and after Indian independence, and for various reasons. Depending on the characteristics of the migrants and on geographical, political, and economic factors, the mode of migration has changed in many ways. Currently, almost 25 million people of Indian origin live outside the country. However, not all of them are able to obtain immigrant status in their host societies, owing to individual circumstances and the immigration policies of the host countries. Those who come back to the homeland after spending days, months, or years abroad are known as return migrants. Returning migrants are 'persons returning to their country of citizenship after having been international migrants, whether short term or long-term'. Increasingly, migration is seen very differently from what was once believed to be a one-way phenomenon. The renewed interest in return migration reflects two aspects: the growing importance of temporary migration programmes in other countries, and the potential role of migrants in developing their home countries. Return migration has been conceptualized in several ways: occasional return, seasonal return, temporary return, permanent return, and circular return. The reasons for return migration include retirement, failure to assimilate in the host country, problems with acculturation in the destination country, being unsuccessful in the emigrating country, acquiring the desired wealth, and the desire to innovate and serve as a change agent in the birth country. With the advent of globalization and the rapid development of transportation systems and communication technologies, migration has become a process by which immigrants forge and sustain simultaneous multi-stranded social relations that link together their societies of origin and settlement. Current theories of transnational migration focus largely on the economic impacts on the home countries, while social, cultural, and political impacts have only recently started gaining momentum; this, however, is changing as globalization radically transforms the way people move around the world. One of the reasons for return migration is the lack of proportionate representation of Asian immigrants in positions of authority and decision-making, which can result from challenges confronted in cultural and structural assimilation. The present study focuses on the socio-economic and demographic profile of return migration of Indians from other countries in general, and particularly on the people returning to Andhra Pradesh.

Keywords: migration, return migration, globalization, development, socio-economic, Asian immigrants, UN, Andhra Pradesh

Procedia PDF Downloads 345
189 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods in the scientific literature tackles the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority of records classified as “GOOD” clients (clients who respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets; the performance indicators are percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
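
Since the paper's mechanism is C#-specific (LINQ expression trees), the sketch below is a Python stand-in for the same representation idea: an expression tree evaluated against an applicant's properties and flattened by pre-order traversal, the form on which the mutation and crossover operators would act. The operators, feature names, and decision threshold are illustrative assumptions, not the paper's configuration.

```python
# Python stand-in for GP credit-scoring expression trees; all names illustrative.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}
FEATURES = ["age", "loan_duration", "income"]   # stand-ins for applicant properties

def random_tree(depth=3):
    """Grow a random expression: a (op, left, right) tuple, feature name, or constant."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES + [round(random.uniform(-2, 2), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node, client):
    """Recursively evaluate the tree against one applicant's property dict."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, client), evaluate(right, client))
    return client[node] if isinstance(node, str) else node

def preorder(node, out=None):
    """Pre-order flattening: the list mutation/crossover would operate on."""
    out = [] if out is None else out
    out.append(node)
    if isinstance(node, tuple):
        preorder(node[1], out)
        preorder(node[2], out)
    return out

client = {"age": 34, "loan_duration": 24, "income": 2.8}
tree = random_tree()
label = "GOOD" if evaluate(tree, client) >= 0.0 else "BAD"   # threshold illustrative
print(len(preorder(tree)), "nodes ->", label)
```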

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 95
188 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfaces that go beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm muscles and the bicep. The raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in control of the drone's motion with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures controlling the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were a resting position, a thumbs up, a hand-swipe-right motion, and a flexing position. MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and its machine learning tools were used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Whenever an output probability for a specific target class was greater than or equal to 80%, the drone performed the expected motion; each movement command was then sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone responded successfully in real time to predefined command inputs issued via the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of the hand gestures, the details of the ANN architecture, and the resulting confusion matrices.
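
A minimal sketch of the described feature pipeline and 80% decision gate is given below, using synthetic data in place of real EMG recordings. The window size, feature set, hidden-layer size, and confidence threshold follow the abstract; the data, command names, and everything else are placeholders (and the sketch uses Python/scikit-learn rather than the MATLAB tooling the authors used).

```python
# Sketch of the EMG feature pipeline: per two-second window, compute mean, RMS,
# and standard deviation per sensor, then classify four gestures with a
# one-hidden-layer network. All data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_windows, n_sensors, samples_per_window = 200, 3, 8
raw = rng.normal(size=(n_windows, n_sensors, samples_per_window))  # fake EMG voltages
labels = rng.integers(0, 4, size=n_windows)  # rest, thumbs-up, swipe-right, flex

# mean, RMS, std per sensor per window -> 9 features per window
mean = raw.mean(axis=2)
rms = np.sqrt((raw**2).mean(axis=2))
std = raw.std(axis=2)
X = np.concatenate([mean, rms, std], axis=1)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, labels)
proba = clf.predict_proba(X[:1])[0]
cmd = ["rest", "up", "right", "land"][proba.argmax()]   # command names hypothetical
gate = "send" if proba.max() >= 0.80 else "ignore"      # the 80% gate from the paper
print(f"top class '{cmd}' at {proba.max():.0%} -> {gate}")
```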

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 145
187 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automated translation, and artificial intelligence. When this theory is applied to any folksong text in a tone language, it reconstructs not only the exact melody, rhythm, and harmonies of that song, as if they were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. As experimental confirmation of the theory, a semi-digital, semi-analog application was designed that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in an ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a text on the computer, a structured song (chorus-verse), and requests from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving the option to choose harmonies, and the user then selects the melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 52
186 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste has been proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and to protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The fact that SCGs have little economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7 and 15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scaling up the process and ensuring more environment-friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction by n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated using a 2³ central composite rotatable design (CCRD), and the results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed good adjustment of the results for a 95% confidence interval, and the plot of predicted vs. experimental values confirmed the satisfactory correlation of the model. In addition, the optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the statistically optimal conditions for coffee waste oil extraction using n-hexane as solvent; under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath to extract oil as a more economical, greener, and more efficient alternative to the Soxhlet method.
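
For readers unfamiliar with the design, the sketch below builds a 2³ CCRD in coded units (factorial, axial, and center points, with axial distance alpha = 2^(3/4) for rotatability) and fits a second-order response surface by least squares. The yield values are fabricated placeholders, not the study's measurements.

```python
# Minimal 2^3 central composite rotatable design + quadratic response surface fit.
# The response values below are fabricated for illustration only.
import itertools
import numpy as np

alpha = 2 ** (3 / 4)   # rotatability criterion for k = 3 factors (~1.682)
factorial = list(itertools.product([-1, 1], repeat=3))                      # 8 runs
axial = [tuple(a * int(i == j) for i in range(3))
         for j in range(3) for a in (-alpha, alpha)]                        # 6 runs
center = [(0.0, 0.0, 0.0)] * 6                                              # 6 runs
X = np.array(factorial + axial + center, dtype=float)                       # 20 runs

def model_matrix(X):
    """Second-order model: intercept, linear, two-way interaction, quadratic terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

# Fabricated oil-yield response with some curvature plus noise
y = 9 + 0.4 * X[:, 0] - 0.6 * X[:, 1]**2 \
    + np.random.default_rng(1).normal(0, 0.1, len(X))
beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```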

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 91
185 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and causes all or part of the sediments carried by the water to settle in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their useful lifetime, but recently mathematical and computational models have become widely used as a suitable tool in reservoir sedimentation studies; these models usually solve the governing equations using the finite element method. This study compares the results of two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using stream tubes in a quasi-two-dimensional scheme; it was used to calibrate a 47-year period and forecast the next 47 years of sedimentation in the Dez dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land, and plays a major role in flood control in the region. The input data, including geometry, hydraulic, and sediment data, cover the period from 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years. The result obtained in the delta region was very satisfactory, with the output from GSTARS4 almost identical to the hydrographic profile surveyed in 2003. In the Dez reservoir, however, owing to its length (65 km) and large volume, vertical currents dominate, making calculations by the above-mentioned method inaccurate. To solve this problem, we used the empirical reduction method to calculate the sedimentation in the downstream area, which gave very good answers. Thus, we demonstrated that by combining these two methods a very suitable model of sedimentation in the Dez dam for the study period can be obtained, and the study showed that the outputs of both methods agree.
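
Both packages ultimately march a one-dimensional sediment-continuity (Exner) equation forward in time. A minimal sketch of that update, with a toy transport law and assumed geometry, is shown below; nothing here reproduces GSTARS4 or HEC-6 inputs or numerics.

```python
# Minimal 1-D Exner update: (1 - p) dz/dt = -dqs/dx. Geometry, porosity, and the
# transport law are illustrative assumptions, not Dez dam data.
import numpy as np

nx, dx, dt = 50, 1000.0, 86400.0        # 50 km reach, daily time step (assumed)
porosity = 0.4
z0 = np.linspace(100.0, 90.0, nx)       # initial bed profile, m (assumed)
z = z0.copy()
# Unit-width discharge declines into the backwater, so transport capacity drops
q = np.linspace(5.0, 3.0, nx)           # m^2/s (assumed)

for _ in range(365):                    # one year of quasi-steady updates
    qs = 1e-5 * q**1.5                  # toy sediment transport capacity, m^2/s
    dqs_dx = np.gradient(qs, dx)
    z -= dt * dqs_dx / (1.0 - porosity)  # negative dqs/dx -> deposition

print("net bed change after 1 yr (m), first 5 nodes:", np.round(z - z0, 3)[:5])
```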

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 285
184 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE bands. GSM (Global System for Mobile Communications) is a mobile telephony frequency band, also called second-generation (2G) mobile networking; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP, and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band, which is internationally reserved for industrial, scientific, and medical use with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil, the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to achieve an efficiency above 50% for an input power of -15 dBm. This ultra-low input power was chosen because, in wireless energy-harvesting systems, the signal power is very low and varies greatly. The rectenna was built on the low-cost FR4 (flame retardant) substrate; the selected antenna is a microstrip antenna consisting of a meandered dipole, optimized using CST Studio software. This antenna has high efficiency, high gain, and high directivity. Gain is the quality of an antenna in capturing, more or less efficiently, the signals transmitted by another antenna and/or station; directivity is the quality an antenna has of capturing energy better in a certain direction. The rectifier circuit uses a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the SMS7630 Schottky diode, which presents a low barrier voltage (between 135 and 240 mV) and a wider band than other diode types, attributes that make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor as part of its input and output filters: the inductor decreases the dispersion effect on the efficiency of the rectifier circuit, and the capacitor eliminates the AC component at the rectifier output, smoothing the signal.
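
As a quick check of the efficiency figure quoted above, the sketch below converts the -15 dBm input to watts and computes the RF-to-DC conversion efficiency for a hypothetical rectified output; the DC voltage and load resistance are assumptions, not the authors' measurements.

```python
# Rectenna efficiency bookkeeping: eta = P_DC / P_RF. Load values hypothetical.
def dbm_to_watts(p_dbm):
    return 10 ** (p_dbm / 10.0) / 1000.0

p_rf = dbm_to_watts(-15.0)       # ≈ 31.6 µW available at the rectifier input
v_dc, r_load = 0.040, 100.0      # hypothetical output: 40 mV DC into 100 Ω
p_dc = v_dc**2 / r_load          # = 16 µW
eta = p_dc / p_rf
print(f"P_RF = {p_rf*1e6:.1f} µW, P_DC = {p_dc*1e6:.1f} µW, efficiency = {eta:.0%}")
```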

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 94
183 Co-Creation of an Entrepreneurship Living Learning Community: A Case Study of Interprofessional Collaboration

Authors: Palak Sadhwani, Susie Pryor

Abstract:

This paper investigates interprofessional collaboration (IPC) in the context of entrepreneurship education. Collaboration has been found to enhance problem solving, leverage expertise, improve resource allocation, and create organizational efficiencies. However, research suggests that successful collaboration is hampered by individual and organizational characteristics. IPC occurs when two or more professionals work together to solve a problem or achieve a common objective, and the necessity for this form of collaboration is particularly prevalent in cross-disciplinary fields. In this study, we utilize social exchange theory (SET) to examine IPC in the context of an entrepreneurship living learning community (LLC) at a large university in the Western United States. Specifically, we explore these research questions: How are the rules or norms that govern the collaboration process established? How are resources valued and distributed? How are relationships developed and managed among and between parties? LLCs are defined as groups of students who live together in on-campus housing and share similar academic or special interests. In 2007, the Association of American Colleges and Universities named living communities a high impact practice (HIP) because of their capacity to enhance and give coherence to undergraduate education. The entrepreneurship LLC in this study was designed to offer first-year college students the opportunity to live and learn with like-minded students from diverse backgrounds. While the university offers other LLC environments, the target residents for this LLC are less easily identified and less apparently homogenous than residents of other LLCs on campus (e.g., Black Scholars, LatinX, Women in Science and Education), creating unique challenges. The LLC is a collaboration between the university's College of Business & Public Administration and the Department of Housing and Residential Education (DHRE); both parties contribute staff, technology, living and learning spaces, and other student resources. This paper reports the results of an ethnographic case study that chronicles the start-up challenges associated with the co-creation of the LLC. SET provides a general framework for examining how resources are valued and exchanged; in this study, it offers insights into the processes through which the parties negotiate the tensions that result from approaching a shared project from very different perspectives and cultures in a novel project environment. These tensions arise from a variety of factors, including team formation and management, allocation of resources, and differing output expectations. The results are useful to both scholars and practitioners of entrepreneurship education and organizational management, suggesting probable points of conflict and potential paths towards reconciliation.

Keywords: case study, ethnography, interprofessional collaboration, social exchange theory

Procedia PDF Downloads 113