Search results for: measurement models
1471 Numerical Analysis of Solar Cooling System
Authors: Nadia Allouache, Mohamed Belmedani
Abstract:
Solar energy is a sustainable, virtually inexhaustible and environmentally friendly alternative to the available fossil fuels. It is a renewable and economical energy that can be harnessed sustainably over the long term and thus stabilizes energy costs. Solar cooling technologies have been developed to curb the growing electricity consumption for air conditioning and to displace the peak load during hot summer days. A numerical analysis of the thermal and solar performances of an annular finned adsorber, the most important component of the adsorption solar refrigerating system, is considered in this work. Different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol, activated carbon AC35/ethanol, and activated carbon BPL/ammonia, are considered in this study. The modeling of the adsorption cooling machine requires the resolution of the equations describing the energy and mass transfer in the tubular finned adsorber. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium and the fins are contained in the annular space, and the adsorber is heated by solar energy. Effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The AC35/methanol pair is the best pair compared to the BPL/ammonia and AC35/ethanol pairs in terms of system performance. The system performances are sensitive to the fin geometry. For the data measured on clear-type days of July 2023 in Algeria and Morocco, the performances of the cooling system are most significant in Algeria.
Keywords: activated carbon AC35-methanol pair, activated carbon AC35-ethanol pair, activated carbon BPL-ammonia pair, annular finned adsorber, performance coefficients, numerical analysis, solar cooling system
Procedia PDF Downloads 55
1470 Simulation Modelling of the Transmission of Concentrated Solar Radiation through Optical Fibres to Thermal Application
Authors: M. Rahou, A. J. Andrews, G. Rosengarten
Abstract:
One of the main challenges in high-temperature solar thermal applications is to transfer concentrated solar radiation to the load with minimum energy loss and maximum overall efficiency. The use of a solar concentrator in conjunction with bundled optical fibres has potential advantages in terms of transmission energy efficiency, technical feasibility and cost-effectiveness compared to a conventional heat transfer system employing heat exchangers and a heat transfer fluid. In this paper, a theoretical and computer simulation method is described to estimate the net solar radiation transmission from a solar concentrator into and through optical fibres to a thermal application at the end of the fibres, over distances of up to 100 m. A key input to the simulation is the angular distribution of radiation intensity at each point across the aperture plane of the optical fibre. This distribution depends on the optical properties of the solar concentrator, in this case a parabolic mirror with a small secondary mirror sharing a common focal point and a point-focus Fresnel lens, which give a collimated beam that passes into the optical fibre bundle. Since solar radiation comprises a broad band of wavelengths with very limited spatial coherence over the full spectrum, only ray tracing is employed to model absorption within the fibre and reflections at the interface between core and cladding, assuming no interference between rays. The intensity of the radiation across the exit plane of the fibre is found by integrating across all directions and wavelengths. Results of applying the simulation model to a parabolic concentrator and point-focus Fresnel lens with a typical optical fibre bundle will be reported, to show how the energy transmission varies with the length of fibre.
Keywords: concentrated radiation, fibre bundle, parabolic dish, Fresnel lens, transmission
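As a rough illustration of how such a transmission estimate can be composed, the sketch below combines Fresnel reflection losses at the fibre faces with Beer-Lambert attenuation along the fibre; the attenuation coefficient and refractive index are assumed values for illustration, not parameters from the study.

```python
import numpy as np

def fibre_transmission(length_m, attenuation_db_per_m=0.01, n_core=1.46, n_air=1.0):
    """Estimate net transmission of a collimated beam through an optical fibre.

    Combines Fresnel reflection losses at the entrance/exit faces with
    Beer-Lambert attenuation along the fibre length. Values are illustrative.
    """
    # Fresnel reflectance at normal incidence for the air/core interface
    r = ((n_core - n_air) / (n_core + n_air)) ** 2
    face_losses = (1.0 - r) ** 2          # entrance and exit faces
    # Attenuation along the fibre, converted from dB to a linear factor
    attenuation = 10 ** (-attenuation_db_per_m * length_m / 10.0)
    return face_losses * attenuation

for L in (10, 50, 100):
    print(f"{L:>4} m: {fibre_transmission(L):.2%} of coupled power transmitted")
```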
Procedia PDF Downloads 565
1469 Investigation of Failure Mechanisms of Composite Laminates with Delamination and Repaired with Bolts
Authors: Shuxin Li, Peihao Song, Haixiao Hu, Dongfeng Cao
Abstract:
The interactive deformation and failure mechanisms, including local buckling/delamination propagation and global buckling, are investigated in this paper with numerical simulation and validation against experimental results. Three-dimensional numerical models using ABAQUS brick elements combined with cohesive elements and contact elements are developed to simulate the deformation and failure characteristics of composite laminates with and without delamination under compressive loading. The zero-thickness cohesive elements are inserted on the possible path of delamination propagation, and the inter-laminar behavior is characterized by a mixed-mode traction-separation law. The numerical simulations identified the complex interaction among local buckling, delamination propagation and final global buckling for composite laminates with delamination under compressive loading. Firstly, there is an interaction between local buckling and delamination propagation, i.e., local buckling induces delamination propagation, and delamination growth then further enhances the local buckling. Secondly, the interaction between the out-of-plane deformation caused by local buckling and the global buckling deformation results in final failure of the composite laminates. The simulation results are validated by good agreement with the experimental results published in the literature. The numerical simulation validated with experimental results revealed that the degradation of the load capacity, in particular of the compressive strength of composite structures with delamination, is mainly attributed to the combined local buckling/delamination propagation effects. Consequently, a simple field-bolt repair approach that can hinder the local buckling and prevent delamination growth is explored. The analysis and simulation results demonstrated that field-bolt repair could effectively restore the compressive strength of composite laminates with delamination.
Keywords: cohesive elements, composite laminates, delamination, local and global buckling, field-bolt repair
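For readers unfamiliar with traction-separation laws, the following minimal sketch evaluates the shape of a single-mode bilinear law; the strength and separation values are assumed, and the mixed-mode law assigned to the cohesive elements in the study is more elaborate.

```python
import numpy as np

def bilinear_traction(delta, t_max=60.0, delta0=1e-3, delta_f=1e-2):
    """Single-mode bilinear traction-separation law (illustrative values).

    t_max   : interface strength (MPa)
    delta0  : separation at damage initiation (mm)
    delta_f : separation at complete failure (mm)
    """
    delta = np.asarray(delta, dtype=float)
    stiffness = t_max / delta0
    return np.where(
        delta <= delta0,
        stiffness * delta,                                        # linear elastic branch
        np.where(delta <= delta_f,
                 t_max * (delta_f - delta) / (delta_f - delta0),  # softening branch
                 0.0))                                            # fully damaged

# Fracture energy is the area under the curve: 0.5 * t_max * delta_f
print("Peak traction at delta0:", bilinear_traction(1e-3), "MPa")
print("G_c =", 0.5 * 60.0 * 1e-2, "N/mm")
```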
Procedia PDF Downloads 120
1468 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set often masks its quality and leaves few practical approaches for analysis. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning: it selects a small subset of features, reducing the feature space according to a chosen evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is applied, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improves the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of predicting different diseases.
Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection
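A minimal sketch of the wrapper idea described above is given below, assuming a KNN classifier as the external evaluator and a simple genetic algorithm over binary feature masks; the dataset, population size and GA settings are illustrative, not those of the study.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated KNN accuracy on the features switched on in the mask."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

# Simple generational GA over binary feature masks
pop = rng.integers(0, 2, size=(20, n_features))
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]        # truncation selection
    children = []
    for _ in range(10):
        p1, p2 = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([p1[:cut], p2[cut:]])     # single-point crossover
        flip = rng.random(n_features) < 0.02             # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

scores = np.array([fitness(ind) for ind in pop])
best_masks = pop[np.argsort(scores)[::-1][:5]]
occurrence = best_masks.sum(axis=0)   # how often a feature appears in the fittest chromosomes
print("Best CV accuracy:", round(float(scores.max()), 3))
print("Most frequently selected features:", np.argsort(occurrence)[::-1][:8])
```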
Procedia PDF Downloads 316
1467 Exploring the Relationships between Job Satisfaction, Work Engagement, and Loyalty of Academic Staff
Authors: Iveta Ludviga, Agita Kalvina
Abstract:
This paper aims to link together the concepts of job satisfaction, work engagement, trust, job meaningfulness and loyalty to the organisation, focusing on a specific type of employment: academic jobs. The research investigates the relationships between job satisfaction, work engagement and loyalty, as well as the impact of trust and job meaningfulness on work engagement and loyalty. The survey was conducted in one of the largest Latvian higher education institutions, and the sample was drawn from academic staff (n=326). A structured questionnaire with 44 reflective-type questions was developed to measure the constructs. Data were analysed using SPSS and SmartPLS software. The variance-based structural equation modelling (PLS-SEM) technique was used to test the model and to predict the most important factors relevant to employee engagement and loyalty. The first-order model included two endogenous constructs (loyalty and intention to stay and recommend, and employee engagement), as well as six exogenous constructs (feeling of fair treatment and trust in management; career growth opportunities; compensation, pay and benefits; management; colleagues and teamwork; and finally job meaningfulness). Job satisfaction was modelled as a second-order construct, and both first- and second-order models were designed for data analysis. It was found that academics are more engaged than satisfied with their work, and the main reason for this was found to be job meaningfulness, which is a significant predictor of work engagement, but not of job satisfaction. Compensation is not significantly related to work engagement, but only to job satisfaction. Trust was significantly related neither to engagement nor to satisfaction; however, it appeared to be a significant predictor of loyalty and intentions to stay with the University. This paper revealed academic jobs as a specific kind of employment where employees can be more engaged than satisfied, and highlighted the specific role of job meaningfulness in the University setting.
Keywords: job satisfaction, job meaningfulness, higher education, work engagement
Procedia PDF Downloads 251
1466 A Feasibility Study of Waste (d) Potential: Synergistic Effect Evaluation by Co-digesting Organic Wastes and Kinetics of Biogas Production
Authors: Kunwar Paritosh, Sanjay Mathur, Monika Yadav, Paras Gandhi, Subodh Kumar, Nidhi Pareek, Vivekanand Vivekanand
Abstract:
A significant fraction of energy is wasted every year through inadequate management of biodegradable organic waste, as development and sustainability are often at odds. The management of these wastes is indispensable to boost their optimum utilization, by converting them into a renewable energy resource (here, biogas) through anaerobic digestion, and to mitigate greenhouse gas emissions. Food and yard wastes may prove to be appropriate and potential feedstocks for anaerobic co-digestion for biogas production. The present study was performed to explore the synergistic effect of co-digesting food waste and yard trimmings from the MNIT campus for enhanced biogas production at different ratios in batch tests (37±1 °C, 90 rpm, 45 days). The results showed that blending two different organic wastes in the proper ratio improved biogas generation considerably, with the highest biogas yield (2044±24 mL g⁻¹VS) achieved at a 75:25 food waste to yard waste ratio on a volatile solids (VS) basis. The yield was 1.7 and 2.2 times higher than mono-digestion of food waste or yard waste (1172±34 and 1016±36 mL g⁻¹VS), respectively. The increase in biogas production may be credited to the optimum C/N ratio resulting in a higher yield. Adding TiO2 nanoparticles, which are sometimes reported to enhance biogas production, showed virtually no effect here. ICP-MS and FTIR analyses were carried out to gain insight into the feedstocks. The modified Gompertz and logistic models were applied for the kinetic study of biogas production, with the modified Gompertz model showing the best goodness-of-fit (R²=0.9978) with the experimental results.
Keywords: anaerobic co-digestion, biogas, kinetics, nanoparticle, organic waste
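For reference, the modified Gompertz fit mentioned above can be reproduced with a short curve-fitting script; the cumulative-yield values below are synthetic stand-ins shaped after the reported numbers, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas yield (mL g^-1 VS) at time t (days).

    P   : biogas production potential
    Rm  : maximum production rate
    lam : lag phase duration
    """
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Illustrative data (days, cumulative yield); not the study's measurements
t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
y = np.array([0, 150, 600, 1150, 1550, 1800, 1950, 2010, 2035, 2044], dtype=float)

popt, _ = curve_fit(modified_gompertz, t, y, p0=[2000.0, 100.0, 2.0])
residuals = y - modified_gompertz(t, *popt)
r_squared = 1 - residuals.var() / y.var()
print("P = %.0f mL/gVS, Rm = %.1f mL/gVS/day, lambda = %.1f days, R^2 = %.4f"
      % (*popt, r_squared))
```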
Procedia PDF Downloads 389
1465 Application of Flory-Patterson's Theory on the Volumetric Properties of Liquid Mixtures: 1,2-Dichloroethane with Aliphatic and Cyclic Ethers
Authors: Linda Boussaid, Farid Brahim Belaribi
Abstract:
The physico-chemical properties of liquid materials in the industrial field in general, and in the chemical industries in particular, constitute a prerequisite for the design of equipment, for the resolution of specific problems (related to purification and separation techniques, risks in the transport of certain materials, etc.) and, therefore, for the production stage. Chloroalkanes and ethers are chemical families of industrial, theoretical and environmental interest. For example, these compounds are used in various applications in the chemical and pharmaceutical industries. In addition, they contribute to the particular thermodynamic behavior (deviation from ideality, association, etc.) of certain mixtures, which constitutes a severe test for predictive theoretical models. Finally, due to the degradation of the environment worldwide, a renewed interest is observed for ethers, because some of their physicochemical properties could contribute to lower pollution (ethers could be used as additives in aqueous fuels). This work is a thermodynamic, experimental and theoretical study of the volumetric properties of liquid binary systems formed from compounds belonging to the chemical families of chloroalkanes and ethers. The densities and excess volumes of the systems studied were determined experimentally at different temperatures in the interval [278.15-333.15] K and at atmospheric pressure, using an Anton Paar DMA 5000 vibrating-tube densitometer. This contribution of experimental data on the volumetric properties of the binary liquid mixtures of 1,2-dichloroethane with an ether, supplemented by an application of the theoretical Prigogine-Flory-Patterson (PFP) model, will contribute to the enrichment of the thermodynamic database and the further development of Flory's theory in its PFP version, for a better understanding of the thermodynamic behavior of these liquid binary mixtures.
Keywords: Prigogine-Flory-Patterson (PFP), volumetric properties, excess volume, ethers
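The excess molar volumes referred to above are conventionally computed from the measured mixture and pure-component densities; a minimal sketch follows, with molar masses and densities assumed for illustration rather than taken from the study.

```python
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """Excess molar volume (cm^3 mol^-1) of a binary liquid mixture.

    V^E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2
    Molar masses in g mol^-1, densities in g cm^-3.
    """
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2

# Illustrative values for 1,2-dichloroethane (1) + a generic ether (2); not measured data
M1, M2 = 98.96, 86.13          # g/mol (ether molar mass assumed for illustration)
rho1, rho2 = 1.246, 0.881      # g/cm^3, approximate room-temperature values
print(excess_molar_volume(0.5, M1, M2, rho_mix=1.052, rho1=rho1, rho2=rho2))
```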
Procedia PDF Downloads 91
1464 A Perspective on Education to Support Industry 4.0: An Exploratory Study in the UK
Authors: Sin Ying Tan, Mohammed Alloghani, A. J. Aljaaf, Abir Hussain, Jamila Mustafina
Abstract:
Industry 4.0 is a term frequently used to describe the new, upcoming industry era. Higher education institutions aim to prepare students to fulfil future industry needs. The advancement of digital technology has paved the way for the evolution of education and technology. The evolution of education has revealed its conservative nature and a high level of resistance to change and transformation. The gap between industry's needs and the competencies generally offered by education reveals an increasing need to find new educational models to face the future. The aim of this study was to identify the main issues faced by both universities and students in preparing the future workforce. From December 2018 to April 2019, a regional qualitative study was undertaken in Liverpool, United Kingdom (UK). Interviews were conducted with employers, faculty members and undergraduate students, and the results were analyzed using the open coding method. Four main issues were identified: the characteristics of the future workforce, students' readiness to work, expectations of the different roles played at the tertiary education level, and awareness of the latest trends. The findings of this paper conclude that employers and academic practitioners agree that their expectations of each other's roles differ and that, in order to face the rapidly changing technology era, students should not only have the right skills but also the right attitude to learning. Therefore, the authors address this issue by proposing a learning framework known as the 'ASK SUMA' framework as a guideline to support students, academicians and employers in meeting the needs of Industry 4.0. Furthermore, this technology era requires employers, academic practitioners and students to work together in order to face the upcoming challenges and fast-changing technologies. It is also suggested that an interactive system should be provided as a platform to support these three different parties in playing their roles.
Keywords: attitude, expectations, industry needs, knowledge, skills
Procedia PDF Downloads 125
1463 Determining Cellular Biomarkers Sensitive to Low Damaging Exposure
Authors: Svetlana Guryeva, Inna Kornienko, Elena Petersen
Abstract:
At present, translational medicine is a rapidly developing branch of biomedicine. The main idea of translational medicine is the practical application of fundamental research. One possible application of translational medicine is researching therapies that improve the age-related condition of the human organism. To fill the gap between experiments and clinical practice, it is necessary to create a standardized system for investigating different effects on cellular aging models. In this study, primary human fibroblasts derived from patients of different ages were used as a cellular aging model. Senescence-associated β-galactosidase activity, lipofuscin, γ-H2AX, the reactive oxygen species level, and cell death markers (annexin V/propidium iodide) were used as biomarkers of the cell functional state. The effects of damaging exposures (oxidative stress and heat shock), potentially positive factors (metformin and acetaminophen), and their combinations were investigated using the described biomarkers. Oxidative stress and heat shock caused an increase in the levels of all biomarkers, and only the cells from young patients partly coped with stress 3 days after the exposures. Metformin improved the state of pretreated cells from young and old patients. Acetaminophen did not produce significant changes in the biomarker levels compared to the action of metformin. This study demonstrated the opportunity to develop a standardized screening system based on biomarkers of the cell functional state to identify potential positive or negative effects of physical and chemical exposures. Moreover, such a system can be useful for the aims of regenerative medicine, to determine the effect of cell pretreatment before transplantation.
Keywords: biomarkers, primary fibroblasts, regenerative medicine, senescence, test system, translational medicine
Procedia PDF Downloads 403
1462 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator for measuring drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to build a prior model, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, ranked by instances of high ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
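A minimal sketch of the phase-one IDW step described above is shown below; the offset-well ROP values and parameter records are invented for illustration, and the distance metric is simply the difference in ROP.

```python
import numpy as np

def idw_parameters(target_rop, offset_rop, offset_params, power=2.0):
    """Inverse Distance Weighting of drilling parameters conditioned on ROP.

    offset_rop    : (n,) ROP values from top-performing offset wells (ft/hr)
    offset_params : (n, 3) corresponding [WOB, RPM, GPM] records
    Returns the weighted-mean parameter set for the target ROP.
    """
    d = np.abs(offset_rop - target_rop)
    if np.any(d == 0):
        return offset_params[d == 0].mean(axis=0)
    w = 1.0 / d ** power
    return (w[:, None] * offset_params).sum(axis=0) / w.sum()

# Illustrative offset-well records: ROP with [WOB (klbf), RPM, GPM]
rop = np.array([55.0, 62.0, 70.0, 78.0, 85.0])
params = np.array([[22, 110, 750],
                   [25, 120, 780],
                   [28, 130, 800],
                   [30, 140, 820],
                   [33, 150, 850]], dtype=float)

wob, rpm, gpm = idw_parameters(target_rop=75.0, offset_rop=rop, offset_params=params)
print(f"Recommended starting point: WOB={wob:.1f} klbf, RPM={rpm:.0f}, GPM={gpm:.0f}")
```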
Procedia PDF Downloads 131
1461 Measurement and Modelling of HIV Epidemic among High Risk Groups and Migrants in Two Districts of Maharashtra, India: An Application of Forecasting Software-Spectrum
Authors: Sukhvinder Kaur, Ashok Agarwal
Abstract:
Background: For the first time, in 2009, India was able to generate estimates of HIV incidence (the number of new HIV infections per year). Analysis of epidemic projections revealed that the number of new annual HIV infections in India had declined by more than 50% during the last decade (GOI Ministry of Health and Family Welfare, 2010). The National AIDS Control Organisation (NACO) then planned to scale up its efforts in generating projections through epidemiological analysis and modelling, drawing on recently available sources of evidence such as HIV Sentinel Surveillance (HSS), India Census data and other critical data sets. Recently, NACO generated the current round of HIV estimates (2012) through the globally recommended Spectrum software and produced estimates for adult HIV prevalence, annual new infections, the number of people living with HIV, AIDS-related deaths and treatment needs. The state-level prevalence and incidence projections produced were used to project the consequences of the epidemic in Spectrum. With HIV estimates generated at the state level in India by NACO, the USAID-funded PIPPSE project, under the leadership of NACO, undertook estimations and projections at the district level using the same Spectrum software. In 2011, adult HIV prevalence in Maharashtra, one of the high-prevalence states, was 0.42%, ahead of the national average of 0.27%. Considering the heterogeneity of the HIV epidemic between districts, two districts of Maharashtra, Thane and Mumbai, were selected to estimate and project the number of People Living with HIV/AIDS (PLHIV), HIV prevalence among adults and annual new HIV infections till 2017. Methodology: Inputs to Spectrum included demographic data from the Census of India since 1980 and the Sample Registration System; programmatic data on 'Alive and on ART (adult and children)', 'Mother-Baby pairs under PPTCT' and 'High Risk Group (HRG) size mapping estimates'; and surveillance data from various rounds of HSS, the National Family Health Survey-III, the Integrated Biological and Behavioural Assessment and the Behavioural Sentinel Surveillance. Major Findings: Assuming current programmatic interventions in these districts, an estimated decrease of 12 percentage points in Thane and 31 percentage points in Mumbai in new infections among HRGs and migrants is projected from 2011 to 2017. Conclusions: The project also validated the decrease in new HIV infections among one of the high-risk groups, FSWs, using programme cohort data from 2012 to 2016. Though there is a decrease in HIV prevalence and new infections in Thane and Mumbai, a further decrease is possible if appropriate programme responses, strategies and interventions are envisaged for specific target groups based on this evidence. Moreover, the evidence needs to be validated by other estimation/modelling techniques, and evidence can be generated for other districts of the state, where HIV prevalence is high and reliable data sources are available, to understand the epidemic within the local context.
Keywords: HIV sentinel surveillance, high risk groups, projections, new infections
Procedia PDF Downloads 211
1460 Mitigation Strategies in the Urban Context of Sydney, Australia
Authors: Hamed Reza Heshmat Mohajer, Lan Ding, Mattheos Santamouris
Abstract:
One of the worst environmental hazards for people who live in cities is the Urban Heat Island (UHI) effect, which is anticipated to become stronger in the coming years as a result of climate change. Accordingly, the key aim of this paper is to study the interaction between urban configuration and mitigation strategies, including increasing the albedo of the urban environment (reflective materials), implementing Urban Green Infrastructure (UGI), and a combination thereof. To analyse the microclimate models of different urban categories in the metropolis of Sydney, this study assesses meteorological parameters using the 3D computational fluid dynamics (CFD) simulation tool ENVI-met. Four main parameters are taken into consideration while assessing the effectiveness of the UHI mitigation strategies: ambient air temperature, wind speed, wind direction, and outdoor thermal comfort. Present-condition simulations of the layouts from the basic model (scenario one) are taken as the benchmark, and the base model is used to calculate the relative percentage variations for each scenario. The findings showed that a maximum cooling of 2.15 °C can be achieved across different urban layouts by combining high-albedo materials with greenery; in addition, layouts with open arrangements (OT1) present a remarkable improvement in ambient air temperature and outdoor thermal comfort when mitigation technologies are applied, compared to their compact counterparts. All layouts also show a greater reduction in the maximum ambient air temperature than in the minimum ambient air temperature. On the other hand, scenarios associated with an increase in greenery are anticipated to have only a slight cooling effect, especially in high-rise layouts.
Keywords: sustainable urban development, urban green infrastructure, high-albedo materials, heat island effect
Procedia PDF Downloads 94
1459 An Exploration of Promoting EFL Students' Language Learning Autonomy Using Multimodal Teaching - A Case Study of an Art University in Western China
Authors: Dian Guan
Abstract:
With the wide application of multimedia and the Internet, the development of teaching theories, and the implementation of teaching reforms, many different university English classroom teaching modes have emerged. The university English teaching mode is changing from a traditional mode based on conversation and text to a multimodal mode containing discussion, pictures, audio, film, etc. Applying such multimodal teaching models is conducive to cultivating lifelong learning skills. In addition, lifelong learning skills can also be called learners' autonomous learning skills. Learners' independent learning ability has a significant impact on English learning. However, many university students, especially art and design students, do not know how to learn independently. When they enter university, their English foundation is relatively weak because they have always learned the language in a traditional way, which, to a certain extent, neglects the cultivation of English learners' independent ability. As a result, the autonomous learning ability of most university students is not satisfactory. The participants in this study were 60 students and one teacher in their first year at a university in western China. Two observations and interviews were conducted inside and outside the classroom to understand the impact of a multimodal university English teaching model on students' autonomous learning ability. The results were analysed, and it was found that the multimodal teaching model of university English significantly affected learners' autonomy. Incorporating classroom presentations and poster exhibitions into multimodal teaching can increase learners' interest in learning and enhance their learning ability outside the classroom. However, further exploration is needed to develop multimodal teaching materials and evaluate multimodal teaching outcomes. Despite its limitations, the study adopts a scientific research method to analyse the impact of the multimodal teaching mode of university English on students' independent learning ability and puts forward a different outlook for further research on this topic.
Keywords: art university, EFL education, learner autonomy, multimodal pedagogy
Procedia PDF Downloads 101
1458 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information on which they must rely. It is, therefore, increasingly urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study is conducted for the particular case of the aquifers of the transboundary Taoudéni sedimentary basin in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area to achieve sustainable management of transboundary groundwater resources. The methodological approach first described the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behaviour of these units was studied through the analysis of spatio-temporal variations in piezometry. The data consist of 692 static-level measurement points and 8 observation wells distributed across the area and capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometry, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rainfall of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October, in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological setting, through the structure and hydrodynamic properties of the layers, was deduced. The spatial analysis reveals that piezometric contours vary between 166 and 633 m, with a trend indicating flow that generally goes from southwest to northeast, with the recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow, with a component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average stock variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
Keywords: hydrodynamic behaviour, Taoudéni basin, piezometry, water table fluctuation
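As an illustration of the Water Table Fluctuation calculation mentioned above, the short sketch below multiplies the seasonal water-table rise by an assumed specific yield; the monthly heads and the specific yield are illustrative values, not the study's observation-well data.

```python
import numpy as np

def wtf_stock_variation(heads_m, specific_yield=0.05):
    """Water Table Fluctuation estimate of annual groundwater stock variation.

    heads_m        : monthly piezometric heads for one hydrological year (m)
    specific_yield : aquifer specific yield (dimensionless, assumed here)
    Returns the stock variation in mm of water (Sy * rise of the water table).
    """
    rise = np.max(heads_m) - np.min(heads_m)     # seasonal water-table rise
    return specific_yield * rise * 1000.0        # m -> mm

# Illustrative monthly heads (m above datum); not the study's observation-well data
heads = np.array([310.2, 310.1, 310.0, 309.9, 309.9, 310.0,
                  310.3, 310.7, 310.9, 310.8, 310.6, 310.4])
print(f"Estimated stock variation: {wtf_stock_variation(heads):.1f} mm/year")
```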
Procedia PDF Downloads 65
1457 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects
Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes
Abstract:
Insects are extremely successful organisms. A sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many species of mosquitoes, like Anopheles gambiae, in order to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of the volatile components of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, with the disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), the sensitivity to bombykol was completely removed, affecting the pheromone-source searching behavior in male moths. Thus, the detection and identification of olfactory receptors in the genomes of insects is fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying Hidden Markov Models (HMMs) and different computational tools, potential candidates for pheromone receptors in Tuta absoluta were obtained, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.
Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction
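One possible realization of the HMM step in such a workflow is to scan a predicted proteome with a profile HMM of known olfactory receptors using HMMER; the sketch below wraps the hmmsearch command line (HMMER must be installed), and the profile and proteome file names are hypothetical.

```python
import subprocess

def hmmsearch_candidates(profile_hmm, proteome_fasta, out_table="hits.tbl", evalue=1e-5):
    """Scan a predicted insect proteome with a profile HMM of olfactory receptors.

    Wraps the HMMER3 `hmmsearch` tool; the profile (e.g. built from an alignment
    of known ORs/PRs) and the proteome FASTA are hypothetical input files.
    Returns the identifiers of sequences passing the E-value threshold.
    """
    subprocess.run(
        ["hmmsearch", "--tblout", out_table, "-E", str(evalue), profile_hmm, proteome_fasta],
        check=True)
    hits = []
    with open(out_table) as handle:
        for line in handle:
            if line.startswith("#"):
                continue
            fields = line.split()
            target, full_seq_evalue = fields[0], float(fields[4])
            if full_seq_evalue <= evalue:
                hits.append(target)
    return hits

# Hypothetical files: an OR profile HMM and the Rhodnius prolixus predicted proteome
candidates = hmmsearch_candidates("insect_OR.hmm", "rhodnius_prolixus_proteins.fasta")
print(f"{len(candidates)} candidate olfactory receptor sequences found")
```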
Procedia PDF Downloads 149
1456 Symbiotic Functioning, Photosynthetic Induction and Characterisation of Rhizobia Associated with Groundnut, Jack Bean and Soybean from Eswatini
Authors: Zanele D. Ngwenya, Mustapha Mohammed, Felix D. Dakora
Abstract:
Legumes are a major source of biological nitrogen and therefore play a crucial role in maintaining soil productivity in smallholder agriculture in southern Africa. Through their ability to fix atmospheric nitrogen in root nodules, legumes are a better option for sustainable nitrogen supply in cropping systems than chemical fertilisers. For decades, farmers have been highly receptive to the use of rhizobial inoculants as a source of nitrogen, due mainly to the availability of elite rhizobial strains at a much lower cost compared to chemical fertilisers. Improving the efficiency of the legume-rhizobia symbiosis in African soils would require the use of highly effective rhizobia capable of nodulating a wide range of host plants. This study assessed the morphogenetic diversity, photosynthetic functioning and relative symbiotic effectiveness (RSE) of groundnut, jack bean and soybean microsymbionts in Eswatini soils as a first step to identifying superior isolates for inoculant production. Rhizobial isolates were cultured in yeast-mannitol (YM) broth until the late log phase, and the bacterial genomic DNA was extracted using the GenElute bacterial genomic DNA kit according to the manufacturer's instructions. The extracted DNA was subjected to enterobacterial repetitive intergenic consensus PCR (ERIC-PCR), and a dendrogram was constructed from the band patterns to assess rhizobial diversity. To assess the N2-fixing efficiency of the authenticated rhizobia, photosynthetic rates (A), stomatal conductance (gs) and transpiration rates (E) were measured at flowering for plants inoculated with the test isolates. The plants were then harvested for nodulation assessment and measurement of plant growth as shoot biomass. The results of ERIC-PCR fingerprinting revealed high genetic diversity among the microsymbionts nodulating each of the three test legumes, with many of them showing less than 70% ERIC-PCR relatedness. The dendrogram generated from the ERIC-PCR profiles grouped the groundnut isolates into 5 major clusters, while the jack bean and soybean isolates were grouped into 6 and 7 major clusters, respectively. Furthermore, the isolates elicited variable nodule numbers per plant, nodule dry matter, shoot biomass and photosynthetic rates in their respective host plants under glasshouse conditions. Of the groundnut isolates tested, 38% recorded high relative symbiotic effectiveness (RSE >80), while 55% of the jack bean isolates and 93% of the soybean isolates recorded high RSE (>80) compared to the commercial Bradyrhizobium strains. About 13%, 27% and 83% of the top N₂-fixing groundnut, jack bean and soybean isolates, respectively, elicited much higher RSE than the commercial strain, suggesting their potential for use in inoculant production after field testing. There was a tendency for both low and high N₂-fixing isolates to group together in the dendrogram from ERIC-PCR profiles, which suggests that RSE can differ significantly among closely related microsymbionts.
Keywords: genetic diversity, relative symbiotic effectiveness, inoculant, N₂-fixing
Procedia PDF Downloads 221
1455 QSAR Modeling of Germination Activity of a Series of 5-(4-Substituent-Phenoxy)-3-Methylfuran-2(5H)-One Derivatives with Potential of Strigolactone Mimics toward Striga hermonthica
Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Cristina Prandi, Piermichele Kobauri
Abstract:
The present study is based on molecular modeling of a series of twelve 5-(4-substituent-phenoxy)-3-methylfuran-2(5H)-one derivatives which have potential as strigolactone mimics toward Striga hermonthica. The first step of the analysis included the calculation of molecular descriptors which numerically describe the structures of the analyzed compounds. The descriptors ALOGP (lipophilicity), AClogS (water solubility) and BBB (blood-brain barrier penetration) served as the input variables in multiple linear regression (MLR) modeling of germination activity toward S. hermonthica. Two MLR models were obtained. The first MLR model contains the ALOGP and AClogS descriptors, while the second one is based on these two descriptors plus the BBB descriptor. Despite breaking the Topliss-Costello rule, the second MLR model has much better statistical and cross-validation characteristics than the first one. The ALOGP and AClogS descriptors are often very suitable predictors of the biological activity of many compounds. They are very important descriptors of the biological behavior and availability of a compound in any biological system (i.e., the ability to pass through cell membranes). The BBB descriptor defines the ability of a molecule to pass through the blood-brain barrier. Besides the lipophilicity of a compound, this descriptor carries information on molecular bulkiness (its value strongly depends on molecular bulkiness). According to the obtained results of MLR modeling, these three descriptors are considered very good predictors of the germination activity of the analyzed compounds toward S. hermonthica seeds. This article is based upon work from COST Action (FA1206), supported by COST (European Cooperation in Science and Technology).
Keywords: chemometrics, germination activity, molecular modeling, QSAR analysis, strigolactones
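A minimal sketch of an MLR model built on these three descriptors is shown below; the descriptor values and germination activities are assumed for illustration only, not the study's measured data, and a leave-one-out Q² is used as a simple cross-validation statistic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Illustrative descriptor matrix [ALOGP, AClogS, BBB] and germination activities
# for a small set of furan-2(5H)-one derivatives; values are assumed, not the study's data.
X = np.array([[2.1, -3.2, 0.7],
              [2.4, -3.5, 0.8],
              [1.8, -2.9, 0.6],
              [2.9, -4.0, 0.9],
              [2.6, -3.8, 0.8],
              [1.5, -2.5, 0.5],
              [3.1, -4.3, 1.0],
              [2.0, -3.0, 0.7]])
y = np.array([62.0, 70.0, 55.0, 81.0, 74.0, 48.0, 85.0, 60.0])   # % germination

model = LinearRegression().fit(X, y)
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_cv) ** 2)
q2 = 1 - press / np.sum((y - y.mean()) ** 2)
print("Coefficients (ALOGP, AClogS, BBB):", model.coef_.round(2))
print("R^2 =", round(model.score(X, y), 3), " Q^2(LOO) =", round(q2, 3))
```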
Procedia PDF Downloads 287
1454 Comparison of McGrath, Pentax, and Macintosh Laryngoscope in Normal and Cervical Immobilized Manikin by Novices
Authors: Jong Yeop Kim, In Kyong Yi, Hyun Jeong Kwak, Sook Young Lee, Sung Yong Park
Abstract:
Background: Several video laryngoscopes (VLs) have been used to facilitate tracheal intubation in normal and potentially difficult airways, especially by novice personnel. The aim of this study was to compare tracheal intubation performance regarding time to intubation, glottic view, difficulty, and dental click, by novices using the McGrath VL, Pentax Airway Scope (AWS) and Macintosh laryngoscope in normal and cervical-immobilized manikin models. Methods: Thirty-five anesthesia nurses without previous intubation experience were recruited. The participants performed endotracheal intubation in a manikin model at two simulated neck positions (normal and fixed neck via cervical immobilization), using three different devices (McGrath VL, Pentax AWS, and Macintosh direct laryngoscope) three times each. Performance parameters included intubation time, success rate of intubation, Cormack-Lehane laryngoscopy grading, dental click, and subjective difficulty score. Results: Intubation time and success rate at the first attempt were not significantly different between the 3 groups in the normal airway manikin. In the cervical-immobilized manikin, the intubation time was shorter (p = 0.012) and the success rate at the first attempt was significantly higher (p < 0.001) when using the McGrath VL and Pentax AWS compared with the Macintosh laryngoscope. Both VLs showed lower difficulty scores (p < 0.001) and more Cormack-Lehane grade I views (p < 0.001). The incidence of dental clicks was higher with the McGrath VL than the Macintosh laryngoscope in both the normal and the cervical-immobilized airway (p = 0.005, p < 0.001, respectively). Conclusion: The McGrath VL and Pentax AWS resulted in shorter intubation times and higher first-attempt success rates compared with the Macintosh laryngoscope when used by novice intubators in a cervical-immobilized manikin model. However, the McGrath VL may carry a higher risk of dental injury than the Macintosh laryngoscope in this scenario.
Keywords: intubation, manikin, novice, videolaryngoscope
Procedia PDF Downloads 158
1453 Decision Making in Medicine and Treatment Strategies
Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi
Abstract:
Three reasons justify the use of decision theory in medicine: 1. The increase in medical knowledge and its complexity makes it difficult to use treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, this variability raises doubts about the relevance of the choices made by physicians. These differences are generally attributed to differences in the estimated probabilities of treatment success and to differing assessments of the outcomes of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making. For this, the decision process should be explained and broken down. A decision problem is to select the best option among a set of choices. The question is what is meant by "best option", or what criteria guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practices and facilitates the search for consensus. In this respect, there are three types of situations: certain, risky, and uncertain situations: 1. In certain situations, the consequences of each decision are certain. 2. In risky situations, each decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences, but the probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians. Decision theory can make decisions more transparent: first, by clarifying the data systematically considered in the problem, and secondly, by stating a few basic principles that should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, thus assisting the patient and doctor in their choices.
Keywords: decision making, medicine, treatment strategies, patient
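As a small illustration of decision-making under risk (the second type of situation above), the sketch below compares two hypothetical treatment options by expected utility; all probabilities and utilities are invented for the example.

```python
def expected_utility(outcomes):
    """Expected utility of a decision under risk.

    outcomes : list of (probability, utility) pairs for one treatment option.
    """
    return sum(p * u for p, u in outcomes)

# Hypothetical example: surgery vs. medication for the same condition
surgery    = [(0.70, 0.95), (0.20, 0.60), (0.10, 0.00)]   # cure, complication, death
medication = [(0.50, 0.90), (0.45, 0.55), (0.05, 0.10)]   # remission, partial, failure

options = {"surgery": surgery, "medication": medication}
for name, outcomes in options.items():
    print(f"{name:<11} expected utility = {expected_utility(outcomes):.3f}")
print("Preferred option:", max(options, key=lambda k: expected_utility(options[k])))
```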
Procedia PDF Downloads 579
1452 Criminal Laws Associated with Cyber-Medicine and Telemedicine in Current Law Systems in the World
Authors: Shahryar Eslamitabar
Abstract:
Currently, the internet plays an important role in various scientific, commercial and service practices. Thanks to information and communication technology, the healthcare industry can offer professional medical services via the internet, generally known as cyber-medicine, over a wider geographical area. With appealing benefits such as convenience in offering healthcare services, improved accessibility to services, enhanced information exchange, cost-effectiveness and time-saving, telehealth has increasingly developed innovative models of healthcare delivery. However, it presents many potential hazards to cyber-patients that are inherent in the use of the system. First, there are legal issues associated with the communication and transfer of information on the internet. These include licensure, malpractice, liabilities and jurisdiction, as well as privacy, confidentiality and security of personal data, which are the most important challenges brought about by this system. Additional items of concern are technological and ethical. Although there are some rules to deal with the pitfalls associated with cyber-medicine practices in the USA and some European countries, for all these developments, it is being practiced in a legal vacuum in many countries. In addition to domestic legislation to deal with potential problems arising from the system, it is also imperative that international or regional agreements be developed to achieve the harmonization of laws among countries and states. This article discusses some implications posed by the practice of cyber-medicine in the healthcare system according to the experience of some developed countries, using a comparative study of laws. It also reviews the status of telehealth laws in Iran. Finally, it is intended to pave the way to outlining a plan for countries like Iran, with a newly established judicial system for health laws, to develop appropriate regulations by providing some recommendations.
Keywords: tele-health, cyber-medicine, telemedicine, criminal laws, legislations, time-saving
Procedia PDF Downloads 661
1451 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed. By assigning news items to their specific domain in advance, judgments in the corresponding field may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the domain embeddings discovered in an unsupervised manner is used to extract comprehensive news features. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, a test is conducted on existing, widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
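One possible, simplified realization of the unsupervised domain-embedding idea is sketched below: cluster low-dimensional text representations to obtain soft domain memberships, then classify on the combined features. The toy corpus, the TF-IDF/SVD/KMeans pipeline and the logistic-regression classifier are illustrative stand-ins, not the framework's actual architecture.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy corpus; real experiments would use a labeled fake-news dataset
texts = ["new vaccine cures all diseases overnight, doctors stunned",
         "central bank raises interest rates by 25 basis points",
         "celebrity endorses miracle weight loss pill with no evidence",
         "parliament passes annual budget after lengthy debate",
         "study links diet to lower risk of heart disease",
         "secret plan to replace all elections with a lottery leaked"]
labels = [1, 0, 1, 0, 0, 1]   # 1 = fake, 0 = real (illustrative)

# Unsupervised "domain embedding": low-dimensional vector plus soft cluster distances
X_text = TfidfVectorizer().fit_transform(texts)
X_low = TruncatedSVD(n_components=3, random_state=0).fit_transform(X_text)
domains = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_low)
domain_embed = domains.transform(X_low)       # distance to each discovered domain

# Feature extraction and classification on the combined features
X = np.hstack([X_low, domain_embed])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("Training accuracy on the toy corpus:", clf.score(X, labels))
```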
Procedia PDF Downloads 97
1450 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users, who then identified "friend controls", and the other using a random sample of non-drug users (controls), who then identified "friend cases". Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < 0.001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
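A minimal sketch of the bootstrap comparison of coefficient variability is shown below; plain logistic regression on synthetic data stands in for the conditional logistic regression on matched pairs used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def bootstrap_coef_se(X, y, n_boot=100):
    """Bootstrap standard errors of logistic-regression coefficients.

    Resamples records with replacement and refits the model; wide standard
    errors indicate an unstable predictive model.
    """
    coefs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # sample rows with replacement
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        coefs.append(model.coef_[0])
    return np.std(coefs, axis=0)

# Synthetic risk-factor data standing in for the survey records
n = 300
X = rng.normal(size=(n, 4))                       # four risk factors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

print("Bootstrap SEs of the coefficients:", bootstrap_coef_se(X, y).round(3))
```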
Procedia PDF Downloads 493
1449 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structure durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The assessment of pavement distress has traditionally been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
Keywords: distress pavement, hyperparameters, automatic classification, deep learning
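A compact sketch of the kind of structural and fine-tuning hyperparameter search described above is given below using Keras; the synthetic images, the small grid and the multi-class (rather than multi-label) setup are simplifications for illustration.

```python
import itertools
import numpy as np
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 5   # transverse, longitudinal, alligator, pothole, intact

def build_dcnn(n_blocks, n_filters, learning_rate):
    """Small DCNN whose structural hyperparameters are searched over."""
    model = models.Sequential([layers.Input(shape=(64, 64, 3))])
    for b in range(n_blocks):
        model.add(layers.Conv2D(n_filters * 2 ** b, (3, 3), activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer=optimizers.Adam(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Synthetic stand-in for the pavement image dataset (real work would load labeled images)
X = np.random.rand(120, 64, 64, 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, 120)

# Exhaustive search over a small grid of structural and fine-tuning hyperparameters
grid = itertools.product([2, 3], [16, 32], [1e-3, 1e-4], [16, 32])
best = (0.0, None)
for n_blocks, n_filters, lr, batch in grid:
    model = build_dcnn(n_blocks, n_filters, lr)
    hist = model.fit(X, y, validation_split=0.2, epochs=2, batch_size=batch, verbose=0)
    val_acc = hist.history["val_accuracy"][-1]
    if val_acc > best[0]:
        best = (val_acc, (n_blocks, n_filters, lr, batch))
print("Best validation accuracy %.3f with (blocks, filters, lr, batch) = %s" % best)
```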
Procedia PDF Downloads 93
1448 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater
Authors: Akanimo Emene, Robert Edyvean
Abstract:
In developing countries, a low-cost method of wastewater treatment is highly recommended. Adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on understanding the relationship between the growth environment and the metal uptake capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as an adsorbent in the design of a heavy-metal adsorption system. Chemically modified LC was used to adsorb heavy metal ions, lead and cadmium, from aqueous solution under varying experimental conditions. The experimental factors adsorption time, initial metal ion concentration, ionic strength and solution pH were studied. The chemical nature and surface area of the LC tissues adsorbing heavy metals were characterised using electron microscopy and infra-red spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases. Maximum adsorption was observed between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The experimental data were analysed with kinetic and isotherm models to characterise the nature of the metal-surface complexes formed. The pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through an understanding of this process, there will be an opportunity to provide an alternative method for water purification. This will provide an option when expensive water treatment technologies are not viable in developing countries.
Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH
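For illustration, the two best-fitting models mentioned above can be fitted to equilibrium and kinetic data with a short script; the adsorption values below are assumed, and the one-site Langmuir form is used for brevity where the study fits a two-site variant.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """One-site Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

# Illustrative Pb(II) adsorption data on modified Luffa cylindrica (not measured values)
Ce = np.array([5, 10, 20, 40, 80, 120], dtype=float)        # mg/L at equilibrium
qe = np.array([8.1, 13.5, 19.8, 25.0, 28.4, 29.3])          # mg/g adsorbed
t  = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)    # min
qt = np.array([6.0, 10.2, 15.8, 21.4, 24.0, 26.1, 27.0])    # mg/g at time t

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])
(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[28.0, 0.001])
print(f"Langmuir: qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Pseudo-second-order: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/mg/min")
```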
Procedia PDF Downloads 89
1447 Expert Solutions to Affordable Housing Finance Challenges in Developing Economies
Authors: Timothy Akinwande, Eddie C. M. Hui
Abstract:
Housing the urban poor has remained a challenge for many years across the world, especially in developing economies, despite the apparent research attention and policy interventions. It is apt to investigate the prevalent affordable housing (AH) provision challenges using unconventional approaches. It is pragmatic to thoroughly examine housing experts to provide supply-side solutions to AH challenges and to investigate informal settlers to deduce solutions from AH demand viewpoints. This study, the supply-side investigation of an ongoing research project, interrogated housing experts to determine significant expert solutions. Focus group discussions and in-depth interviews were conducted with housing experts in Nigeria. Through descriptive, content, and systematic thematic analyses of the data, the major finding is that deliberate finance models designed for the urban poor are the most significant housing finance solution in developing economies. Other findings are that adequately implemented rent control policies, deliberate PPP approaches such as inclusionary housing and land-value capture, and urban renewal programmes that enlighten and tutor the urban poor on how to earn more, spend wisely, and invest in their own better housing will effectively solve AH finance challenges. The findings point to the best approaches for achieving effective affordable housing finance for the urban poor in Nigeria, which is indispensable for the achievement of the sustainable development goals. This research's originality lies in the exploration of experts' opinions on AH finance to produce an equation model of critical solutions to AH finance challenges. The study data are a useful resource for future pro-poor housing studies. This study makes housing policy-oriented recommendations toward effective, affordable housing for the urban poor in developing countries. Keywords: affordable housing, effective affordable housing, housing policy, housing research, sustainable development, urban poor
Procedia PDF Downloads 86
1446 Empowering Children through Co-creation: Writing a Book with and for Children about Their First Steps Towards Urban Independence
Authors: Beata Patuszynska
Abstract:
Children are largely absent from Polish social discourse, a fact which is mirrored in urban planning processes. Their absence creates a vicious circle: an unfriendly urban space discourages children from going outside on their own, so adults do not see a need to make spaces more friendly for a group that is not present. The pandemic and lockdown, with their closed schools and temporary ban on unaccompanied minors on the streets, have only reinforced this. The project, co-writing with children a book about their first steps into urban independence, aims at empowering children and enabling them to find their voice when it comes to urban space. The foundation for the book was data collected during research and workshops with children from Warsaw primary schools aged 7-10, the age at which they begin independent travel in the city. The project was carried out with the participation and involvement of children at each creative step. Children were (1) models: the narrator is a 7-year-old boy getting ready for urban independence, who shares his experience as well as the experience of his school friends and his 10-year-old sister, who already travels on her own. Children were (2) teachers: the book is based on authentic children's stories and experience, along with the author's findings from research undertaken with children; the material was extended by observations and conclusions made during the pandemic. Children were (3) reviewers: a series of draft chapters from the book underwent review by children during workshops performed in a school. The process demonstrated that all children experience similar pleasures and worries when it comes to interaction with urban space, and that they have similar needs that must be satisfied. In this article, I discuss: (1) the advantages of creating together with children; (2) my conclusions on how to work with children in participatory processes; (3) the research results: perceptions of urban space by children aged 7-10 when they begin independent travel in the city, the barriers to and pleasures derived from independent urban travel, and the influence of the pandemic on children's feelings and their behaviour in urban spaces. Keywords: children, urban space, co-creation, participation, human rights
Procedia PDF Downloads 103
1445 An AI-generated Semantic Communication Platform in HCI Course
Authors: Yi Yang, Jiasong Sun
Abstract:
Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial intelligence-based interactions. For more than a decade, the course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. Advancements in AI generation technology have gained significant attention from both academia and industry in recent years, exemplified by language models such as GPT-3 that generate human-like dialogues from given prompts. The latest version of our Human-Computer Interaction course puts into practice a semantic communication platform based on AI generation techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information while ensuring efficient end-to-end communication with minimal latency. The AI-generated semantic communication platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts; these data are transmitted and then reconstructed with AI generation techniques at the receiving end. Visual signals with a high retainability rate, on the other hand, are compressed and transmitted according to their respective regions. The platform and the associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies. Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts
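The routing idea described above (low-retainability signals become textual prompts for generative reconstruction; high-retainability signals are compressed region by region) can be summarised in the following purely illustrative sketch. Every component in it, the retainability score, the captioning step, and the generative reconstruction, is a hypothetical stub, not the platform's actual implementation.

```python
# Purely illustrative sketch of retainability-based routing; all functions are
# hypothetical stubs standing in for the platform's real models.
from dataclasses import dataclass
import numpy as np

RETAINABILITY_THRESHOLD = 0.5  # assumed cut-off between the two transmission paths

@dataclass
class Packet:
    kind: str       # "prompt" (generative path) or "regions" (compressed path)
    payload: object

def retainability_score(image: np.ndarray) -> float:
    """Stub: score how much of the source must be preserved (here, detail/variance)."""
    return float(np.clip(image.std() * 4.0, 0.0, 1.0))

def caption_image(image: np.ndarray) -> str:
    """Stub standing in for an AI captioning model that produces a textual prompt."""
    return "a scene with %dx%d resolution" % image.shape[:2]

def generate_image_from_prompt(prompt: str, shape=(64, 64)) -> np.ndarray:
    """Stub standing in for a generative model reconstructing the signal."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.random(shape)

def compress_regions(image: np.ndarray, block: int = 16) -> list:
    """Stub regional compression: keep only the mean of each block."""
    h, w = image.shape[:2]
    return [(i, j, float(image[i:i + block, j:j + block].mean()))
            for i in range(0, h, block) for j in range(0, w, block)]

def transmit(image: np.ndarray) -> Packet:
    """Sender side: route the signal according to its retainability score."""
    if retainability_score(image) < RETAINABILITY_THRESHOLD:
        return Packet("prompt", caption_image(image))   # low retainability
    return Packet("regions", compress_regions(image))   # high retainability

def receive(packet: Packet, shape=(64, 64)) -> np.ndarray:
    """Receiver side: reconstruct generatively or from the compressed regions."""
    if packet.kind == "prompt":
        return generate_image_from_prompt(packet.payload, shape)
    out = np.zeros(shape)
    for i, j, mean in packet.payload:
        out[i:i + 16, j:j + 16] = mean
    return out

if __name__ == "__main__":
    frame = np.random.default_rng(0).random((64, 64))
    print(receive(transmit(frame)).shape)
```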
Procedia PDF Downloads 116
1444 Modelling of Heat Generation in a 18650 Lithium-Ion Battery Cell under Varying Discharge Rates
Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery
Abstract:
Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is the addition of the Joule heating effects and the entropic changes across the battery. Such values can be derived by identifying the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted using several non-linear mathematical function types, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3~7), exponential (n = 2), and power function. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results display low deviation between the experimental and CFD temperature plots. As such, the heat generation function formulated could be utilized more easily for larger battery applications than other available methods. Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop
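One common way to write the summation of Joule and entropic heating referred to above is Q = I*(V - OCV) + I*T*(dOCV/dT), with the current taken as negative during discharge. The sketch below evaluates this expression on placeholder discharge data and then performs a polynomial parameter fit over orders 3-7, as the abstract describes; all numbers are illustrative, not the measured 18650 data.

```python
# Minimal sketch of the heat-generation calculation and polynomial fitting step.
# Placeholder voltage/temperature curves, not the measured 18650 data.
import numpy as np

t = np.linspace(0.0, 3600.0, 25)            # discharge time [s] (~1C for 1 h)
tau = t / t[-1]                             # normalised time for well-conditioned fits
I = -2.5                                    # current [A]; negative on discharge (charge-positive convention)
ocv = 4.15 - 1.00 * tau - 0.15 * tau**2     # open-circuit voltage [V] (placeholder curve)
v = ocv - (0.06 + 0.04 * tau**2)            # terminal voltage [V]; assumed growing overpotential
T = 298.15 + 8.0 * tau                      # cell temperature [K] (placeholder rise)
docv_dT = -0.4e-3                           # entropic coefficient dOCV/dT [V/K] (typical order)

# Irreversible (Joule/overpotential) term plus reversible (entropic) term.
Q = I * (v - ocv) + I * T * docv_dT         # heat generation rate [W]

# Fit Q(t) with polynomial orders 3..7 and keep the lowest-residual fit, mirroring
# the parameter-fitting step before handing Q to the CFD solver as a heat source.
best_order, best_err, best_coeffs = None, np.inf, None
for order in range(3, 8):
    coeffs = np.polyfit(tau, Q, order)
    err = float(np.sum((np.polyval(coeffs, tau) - Q) ** 2))
    if err < best_err:
        best_order, best_err, best_coeffs = order, err, coeffs

print("mean Q = %.3f W, best polynomial order = %d" % (Q.mean(), best_order))
```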
Procedia PDF Downloads 95
1443 Investment and Economic Growth: An Empirical Analysis for Tanzania
Authors: Manamba Epaphra
Abstract:
This paper analyzes the causal relationships among domestic private investment, public investment, foreign direct investment, and economic growth in Tanzania during the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy, and macroeconomic stability (proxied by inflation) is used to estimate the impact of investment on economic growth. In addition, the economic growth models of Phetsavong and Ichihashi (2012) and Le and Suruga (2005) are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand and on foreign direct investment on the other. A correlation test is applied to check the correlation among independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the Ramsey RESET specification test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test, and the White heteroskedasticity test, reveal that the model shows no signs of misspecification and that the residuals are serially uncorrelated, normally distributed, and homoskedastic. Overall, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. The results also reveal that control variables such as trade openness and improvements in life expectancy tend to increase real GDP growth. Moreover, the revealed negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth is reduced when the public investment-to-GDP ratio exceeds 8-10 percent. Thus, there is a great need for promoting domestic saving so as to encourage domestic investment for economic growth. Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth
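The sketch below illustrates, on synthetic data, the kind of growth regression and diagnostic battery the abstract describes, using statsmodels for the Breusch-Godfrey, Jarque-Bera, White, and Ramsey RESET tests. The variable set is reduced, the data are invented rather than the Tanzanian series, and linear_reset requires statsmodels 0.12 or newer.

```python
# Minimal sketch: growth regression plus diagnostic tests on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white, linear_reset
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(42)
n = 45  # annual observations, mirroring 1970-2014

df = pd.DataFrame({
    "priv_inv":  rng.normal(15, 3, n),   # domestic private investment, % of GDP
    "pub_inv":   rng.normal(8, 2, n),    # public investment, % of GDP
    "fdi":       rng.normal(3, 1, n),    # FDI inflows, % of GDP
    "openness":  rng.normal(40, 5, n),   # trade openness, % of GDP
    "inflation": rng.normal(12, 4, n),   # CPI inflation, %
})
# Synthetic growth series, loosely consistent with the signs reported in the abstract.
df["gdp_growth"] = (0.25 * df.priv_inv + 0.10 * df.fdi + 0.05 * df.openness
                    - 0.05 * df.inflation + rng.normal(0, 1, n))

X = sm.add_constant(df.drop(columns="gdp_growth"))
res = sm.OLS(df["gdp_growth"], X).fit()
print(res.summary())

# Diagnostic battery mirroring the abstract.
_, bg_p, _, _ = acorr_breusch_godfrey(res, nlags=2)   # Breusch-Godfrey serial correlation LM
_, jb_p, _, _ = jarque_bera(res.resid)                # Jarque-Bera normality
_, w_p, _, _ = het_white(res.resid, X)                # White heteroskedasticity
reset = linear_reset(res, power=2, use_f=True)        # Ramsey RESET (statsmodels >= 0.12)
print("BG p=%.3f, JB p=%.3f, White p=%.3f, RESET p=%.3f" % (bg_p, jb_p, w_p, reset.pvalue))
```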
Procedia PDF Downloads 290
1442 Prevalence of Mycobacterium Tuberculosis Infection and Rifampicin Resistance among Presumptive Tuberculosis Cases Visiting Tuberculosis Clinic of Adare General Hospital, Southern Ethiopia
Authors: Degineh Belachew Andarge, Tariku Lambiyo Anticho, Getamesay Mulatu Jara, Musa Mohammed Ali
Abstract:
Introduction: Tuberculosis (TB) is a communicable chronic disease caused by Mycobacterium tuberculosis (MTB). About one-third of the world's population is latently infected with MTB, and TB is among the top 10 causes of mortality worldwide from a single pathogen. Objective: The aim of this study was to determine the prevalence of tuberculosis, rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis, and associated factors among presumptive tuberculosis cases attending the tuberculosis clinic of Adare General Hospital located in Hawassa city. Methods: A hospital-based cross-sectional study was conducted among 321 tuberculosis-suspected patients from April to July 2018. Socio-demographic, environmental, and behavioral data were collected using a structured questionnaire. Sputum specimens were analyzed using GeneXpert. Data entry was made using Epi Info version 7, and analysis was performed with SPSS version 20. Logistic regression models were used to determine the risk factors, with a p-value of less than 0.05 taken as the cut-off. Results: In this study, the prevalence of Mycobacterium tuberculosis was 98 (30.5%), with a 95% confidence interval of 25.5–35.8, and the prevalence of rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis among the 98 Mycobacterium tuberculosis confirmed cases was 4 (4.1%). The prevalence of rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis among all tuberculosis-suspected patients was 1.24%. Participants who had a history of treatment with anti-tuberculosis drugs were more likely to develop rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis. Conclusions: This study identified relatively high rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis among tuberculosis-suspected patients in the study area. Early detection of drug-resistant Mycobacterium tuberculosis should be given enough attention to strengthen the management of tuberculosis cases, improve direct observation therapy short-course, and eventually minimize the spread of rifampicin-resistant tuberculosis strains in the community. Keywords: rifampicin resistance, mycobacterium tuberculosis, risk factors, prevalence of TB
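As a small worked example of the figures reported above (98 GeneXpert-confirmed cases out of 321 presumptive cases, 4 of them rifampicin-resistant/MDR), the snippet below reproduces the prevalence and resistance percentages. The study does not state its confidence-interval method, so the Wilson interval used here is an assumption.

```python
# Worked example of the prevalence figures reported in the abstract.
# The study's CI method is not stated; the Wilson interval below is an assumption.
from statsmodels.stats.proportion import proportion_confint

suspected = 321          # presumptive TB cases tested
mtb_positive = 98        # GeneXpert-confirmed MTB
rr_mtb = 4               # rifampicin-resistant/MDR among the confirmed cases

prev_mtb = mtb_positive / suspected
low, high = proportion_confint(mtb_positive, suspected, alpha=0.05, method="wilson")
print("MTB prevalence: %.1f%% (95%% CI %.1f-%.1f%%)" % (100 * prev_mtb, 100 * low, 100 * high))

print("RR/MDR among confirmed cases: %.1f%%" % (100 * rr_mtb / mtb_positive))    # ~4.1%
print("RR/MDR among all presumptive cases: %.2f%%" % (100 * rr_mtb / suspected)) # ~1.25%
```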
Procedia PDF Downloads 111