Search results for: stock prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2975

365 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of the magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal, multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting and winding. Moreover, in production, considerable attention is given to the measurement of the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. Measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each of the applied measurement conditions were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the requirement for advanced characterization of magnetic materials used in electric drives and automotive applications.
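
As background to the loss terms discussed above, iron losses in electrical steel are commonly separated into hysteresis, classical eddy current and excess components (the Bertotti decomposition). This is a standard textbook form given for context, not necessarily the exact model used by the authors:

```latex
% Classical loss-separation (Bertotti) form under sinusoidal excitation
P_{fe} = k_h \, f \, B_m^{\alpha}      % hysteresis loss
       + k_c \, f^{2} B_m^{2}          % classical eddy current loss
       + k_e \, f^{1.5} B_m^{1.5}      % excess (anomalous) loss
```

Here f is the magnetizing frequency, B_m the peak flux density, and k_h, k_c, k_e, alpha are material-dependent coefficients; non-sinusoidal, multi-harmonic excitation and residual stress alter these coefficients, which is why the measurements described above depart from supplier datasheet values.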

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 137
364 An Analysis of the Regression Hypothesis from a Shona Broca’s Aphasia Perspective

Authors: Esther Mafunda, Simbarashe Muparangi

Abstract:

The present paper tests the applicability of the Regression Hypothesis to the pathological language dissolution of a Shona male adult with Broca’s aphasia. It particularly assesses the prediction of the Regression Hypothesis, which states that the process by which language is forgotten is the reversal of the process by which it was acquired. The main aim of the paper is to find out whether mirror symmetries exist between L1 acquisition and L1 dissolution of tense in Shona and, if so, what might cause these regression patterns. The paper also seeks to highlight the practical contributions that linguistic theory can make to solving language-related problems. Data was collected from a 46-year-old male adult with Broca’s aphasia who was receiving speech therapy at St Giles Rehabilitation Centre in Harare, Zimbabwe. The primary data elicitation method was experimental, using the probe technique. The Shona version of the TART (Test for Assessing Reference Time), in the form of sequencing pictures, was used to assess tense in the Broca’s aphasic and in a 3.5-year-old child. Using SPSS (Statistical Package for the Social Sciences) and Excel analysis, it was established that the use of the future tense was impaired in the Shona Broca’s aphasic, whilst the present and past tenses were intact. However, though the past tense was intact in the male adult with Broca’s aphasia, reference was made to the remote past. The use of the future tense was also found to be difficult for the 3.5-year-old Shona-speaking child. No difficulties were encountered in using the present and past tenses. This means that mirror symmetries were found between L1 acquisition and L1 dissolution of tense in Shona. On the basis of the results of this research, it can be concluded that the use of tense in a Shona adult with Broca’s aphasia supports the Regression Hypothesis. The findings of this study are important for speech therapy in the context of Zimbabwe. The study also contributes to Bantu linguistics in general and to Shona linguistics in particular. Further studies could focus on the rest of the Bantu language varieties in terms of aphasia.

Keywords: Broca’s Aphasia, regression hypothesis, Shona, language dissolution

Procedia PDF Downloads 87
363 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques that can support social work interventions and practitioners' decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques comprise a set of generalizable, data-driven algorithms, which means that rules and solutions are derived by examining data, based on the patterns present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict a likely outcome based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem the technique needs to tackle. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which predict quantitative variables using a continuous function, and classification problems, which seek to predict outcomes from discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
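
A minimal sketch of the kind of classifier described above, using scikit-learn's logistic regression. The file name and column names (gender, age, grade, school_type, self_esteem, cyberbullied) are hypothetical placeholders, not the actual survey fields:

```python
# Hypothetical sketch of a logistic regression cyberbullying classifier,
# loosely following the abstract; file and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("polyvictimization_2017.csv")           # hypothetical file name
X = pd.get_dummies(df[["gender", "age", "grade", "school_type", "self_esteem"]],
                   drop_first=True)                       # encode categorical predictors
y = df["cyberbullied"]                                    # 1 = experienced cyberbullying

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))  # ~0.6 reported in the abstract
```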

Keywords: cyberbullying, evidence-based practice, machine learning, social work research

Procedia PDF Downloads 166
362 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine

Authors: Shrawan Baghel, Helen Cathcart, Niall J. O'Reilly

Abstract:

Amorphous drug formulations have great potential to enhance the solubility, and thus the bioavailability, of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive the system towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on extrapolation of the configurational entropy to zero (m_D,CE) and on the heating rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is also assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant 'n', which provides insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems were also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of the model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA) and crystallization kinetics are found to be in good correlation with the stability prediction of amorphous solid dispersions. Thus, this research work involves a multidisciplinary approach to establish fragility, GFA and crystallization kinetics as stability predictors for amorphous drug formulations.
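
For reference, the two kinetic relations named above are commonly written as follows (standard forms; the authors' exact parameterization may differ):

```latex
% Vogel-Tammann-Fulcher (VTF) relation for the mean structural relaxation time
\tau(T) = \tau_0 \exp\!\left(\frac{D\,T_0}{T - T_0}\right)

% Johnson-Mehl-Avrami (JMA) model for the isothermal crystallized fraction,
% where the Avrami exponent n reflects the nucleation/growth mechanism
\alpha(t) = 1 - \exp\!\left[-\left(k\,t\right)^{n}\right]
```

Here tau_0, D and T_0 are fitting parameters (T_0 being the zero-mobility temperature) and k is a temperature-dependent rate constant.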

Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability

Procedia PDF Downloads 348
361 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil

Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz

Abstract:

Introduction: This study examines the potential of two oblique wing configurations that were initiated by German aerodynamicists during WWII. Because of the end of the war, the project was not completed, and this research targets the revival of the German oblique biplane configuration. The design uses two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing and the left half like a backward-swept wing; the reverse applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential of achieving improved aerodynamic performance and flight efficiency over a wide range of sweep angles. This will help identify the most accurate value of the sweep angle at which the aircraft possesses both stability and better aerodynamics. Explaining the methods used: The aircraft configuration is designed using SolidWorks, after which a series of aerodynamic predictions is conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in Ansys Fluent. The results are then compared with theoretical and flight data of different supersonic fighter aircraft of the same category (AD-1) and with a wind tunnel test model at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient, almost double that found for fighter jets. To reach supersonic speed, the sweep angle is increased to a maximum of 60 degrees, depending on the mission profile. General findings: The oblique biplane can be a future fighter jet aircraft because of its high performance in terms of aerodynamics, cost, structural design and weight.

Keywords: biplane, oblique wing, sweep angle, supercritical airfoil

Procedia PDF Downloads 269
360 Alternative Energy and Carbon Source for Biosurfactant Production

Authors: Akram Abi, Mohammad Hossein Sarrafzadeh

Abstract:

Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest in the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity and applicability at extreme temperature and pH enable them to be used in a variety of applications, such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide, called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, anti-tumoral and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstocks. Utilization of inexpensive substrates and unconventional carbon sources, such as urban or agro-industrial wastes, is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. In an effort to fulfill this purpose, in this work we have used olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate their effect on both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of the oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration. Therefore, cell dry weight measurements, with oil drops removed to prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were taken as an indirect measure of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than two-fold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared to conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result to be considered for large-scale process development. Furthermore, these results can be used to develop strategies for the utilization of agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high costs of biosurfactant production.

Keywords: agro-industrial waste, bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin

Procedia PDF Downloads 295
359 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria

Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo

Abstract:

Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. Direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as search strings within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were then analysed and gaps in knowledge or research identified. From the extensive review, it was found that the factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk-sharing in project risk management, would be a panacea for the cost estimation problems leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It is recommended that practitioners in the construction industry always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, and that this should give stakeholders in both the private and public sectors a more in-depth understanding of estimation effectiveness and efficiency.

Keywords: cost variability, construction projects, future studies, Nigeria

Procedia PDF Downloads 198
358 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer adaptive tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on item response theory (IRT), in which items are selected using maximum information selection or selection from the posterior, and ability is estimated using maximum-likelihood (ML) or maximum a posteriori (MAP) estimators, respectively. This study aims at combining classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model has to re-calculate the ability every time it gets a request, whereas the prediction from a neural network can be done in a single step with an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features like test type could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models which may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessment. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
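
As a minimal illustration of the idea above (a sketch, not the authors' pymc-based pipeline), the snippet below simulates 2PL IRT responses and trains a scikit-learn regressor to map a response pattern directly to ability, so that scoring at serving time is a single forward pass:

```python
# Sketch: train a neural-network regressor to estimate IRT ability from responses.
# The simulated 2PL item bank stands in for a real calibrated bank (assumption).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_items, n_examinees = 30, 5000
a = rng.uniform(0.5, 2.0, n_items)         # item discriminations
b = rng.normal(0.0, 1.0, n_items)          # item difficulties
theta = rng.normal(0.0, 1.0, n_examinees)  # true abilities

# 2PL probability of a correct response, then simulated binary responses
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
X = (rng.uniform(size=p.shape) < p).astype(float)

X_tr, X_te, th_tr, th_te = train_test_split(X, theta, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_tr, th_tr)
rmse = np.sqrt(np.mean((net.predict(X_te) - th_te) ** 2))
print(f"Held-out ability RMSE: {rmse:.3f}")
```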

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 169
357 Electrochemical Bioassay for Haptoglobin Quantification: Application in Bovine Mastitis Diagnosis

Authors: Soledad Carinelli, Iñigo Fernández, José Luis González-Mora, Pedro A. Salazar-Carballo

Abstract:

Mastitis is the most relevant inflammatory disease in cattle, affecting animal health and causing important economic losses on dairy farms. This disease takes place in the mammary gland or udder when opportunistic microorganisms, such as Staphylococcus aureus, Streptococcus agalactiae, or Corynebacterium bovis, invade the teat canal. According to the severity of the inflammation, mastitis can be classified as sub-clinical, clinical or chronic. Standard methods for mastitis detection include somatic cell counts, cell culture, the electrical conductivity of the milk, and the California test (evaluation of the consistency of the 'gel-like' matrix after cells are lysed with detergents). However, these assays present some limitations for the accurate detection of subclinical mastitis. Recently, haptoglobin, an acute phase protein, has been proposed as a novel and effective biomarker for mastitis detection. In this work, an electrochemical biosensor based on polydopamine-modified magnetic nanoparticles (MNPs@pDA) for haptoglobin detection is reported. The MNPs@pDA were synthesized by our group and functionalized with hemoglobin due to its high affinity for haptoglobin. The captured protein was labeled with specific antibodies modified with alkaline phosphatase for its electrochemical detection using an electroactive substrate (1-naphthyl phosphate) by differential pulse voltammetry. After optimization of the assay parameters, haptoglobin determination was evaluated in milk. The strategy presented in this work shows a wide detection range, achieving a limit of detection of 43 ng/mL. The accuracy of the strategy was determined by recovery assays, with recoveries of 84% and 94.5% for two Hp levels around the cut-off value. Real milk samples were tested, and the prediction capacity of the electrochemical biosensor was compared with a commercial haptoglobin ELISA kit. The performance of the assay demonstrates that this strategy is an excellent alternative screening method for the detection of sub-clinical bovine mastitis.

Keywords: bovine mastitis, haptoglobin, electrochemistry, magnetic nanoparticles, polydopamine

Procedia PDF Downloads 166
356 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the areas most vulnerable to climate change, due to flooding and sea level rise, and therefore to an increased burden of climate-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. Climate is believed to be an important factor for dengue transmission. This study aims to enhance the capacity for dengue prediction by relating dengue incidence to climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation), while land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well for the whole study region. However, the largest outbreak, in 2007, was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects are discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
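
The seasonal reproduction number mentioned above is typically built from a Ross-Macdonald-type expression for vector-borne transmission; one classical form is given here only as background, since the authors' exact formulation is not stated:

```latex
% Classical Ross-Macdonald basic reproduction number for a vector-borne disease
R_0 = \frac{m\,a^{2}\,b\,c}{r\,\mu}\, e^{-\mu n}
```

Here m is the ratio of mosquitoes to humans, a the biting rate, b and c the mosquito-to-human and human-to-mosquito transmission probabilities, mu the mosquito mortality rate, n the extrinsic incubation period, and r the human recovery rate; making parameters such as m, a and mu functions of temperature and rainfall yields a seasonally varying reproduction number.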

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 378
355 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
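
Constraint-based models of the kind described above are typically analysed by flux balance analysis; a standard statement of that optimisation problem (general form, not specific to this reconstruction) is:

```latex
% Flux balance analysis: find a steady-state flux vector v that maximises a
% cellular objective (e.g. ATP synthesis) subject to stoichiometric and capacity constraints
\max_{v}\; c^{\top} v
\quad \text{subject to} \quad
S\,v = 0, \qquad v_{\min} \le v \le v_{\max}
```

Here S is the stoichiometric matrix of the reconstruction, v the vector of reaction fluxes, and c selects the objective reaction; omics data from patients and controls can be used to tighten the bounds v_min and v_max and thereby specialise the generic model to each condition.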

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 289
354 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors

Authors: Chen Liu, Dawit Negussey

Abstract:

EPS (expanded polystyrene) geofoam, used as a lightweight material in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points, at low and large strain levels, for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding-skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower-density EPS block presents low variations in localized stress distribution compared to higher-density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing. The pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructure with geofoam in geotechnical applications.

Keywords: geofoam, pressure distribution, tactile pressure sensors, interface

Procedia PDF Downloads 169
353 Evolutionary Prediction of the Viral RNA-Dependent RNA Polymerase of Chandipura vesiculovirus and Related Viral Species

Authors: Maneesh Kumar, Roshan Kamal Topno, Manas Ranjan Dikhit, Vahab Ali, Ganesh Chandra Sahoo, Bhawana, Major Madhukar, Rishikesh Kumar, Krishna Pandey, Pradeep Das

Abstract:

Chandipura vesiculovirus is an emerging (-)ssRNA virus belonging to the genus Vesiculovirus of the family Rhabdoviridae, associated with fatal encephalitis in tropical regions. The multifunctional viral RNA-dependent RNA polymerase (vRdRp), which carries conserved amino acid residues in these pathogens, is assigned to synthesize the distinct viral polypeptides. The lack of proofreading ability of the vRdRp produces many mutated variants. Here, we have performed an evolutionary analysis of 20 vRdRp protein sequences from different strains of Chandipura vesiculovirus, along with other viral species from the genus Vesiculovirus, inferred in MEGA 6.06 employing the neighbour-joining method. The p-distance algorithm has been used to calculate the optimum tree, which showed a sum of branch lengths of about 1.436. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. No mutation was observed in the Indian strains of Chandipura vesiculovirus. In the vRdRp, residues 1230 (His) and 1231 (Arg) actively participate in catalysis and are found to be conserved in the different strains of Chandipura vesiculovirus. Both residues were also conserved in the other viral species from the genus Vesiculovirus. Many isolates exhibited the maximum number of mutations in the catalytic regions of Chandipura vesiculovirus strains, at positions 26 (Ser→Ala), 47 (Ser→Ala), 90 (Ser→Tyr), 172 (Gly→Ile, Val), 172 (Ser→Tyr), 387 (Asn→Ser), 1301 (Thr→Ala), 1330 (Ala→Glu), 2015 (Phe→Ser) and 2065 (Thr→Val), which makes them variants under the different tropical conditions from which they evolved. The results clarify the concept of RNA evolution and support the development of vRdRp as an evolutionary marker, although only a limited number of vRdRp protein sequences are available for Chandipura vesiculovirus and other species. This might provide possibilities to identify the virulence level during viral multiplication in a host.
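
The p-distance used above for tree construction is simply the proportion of sites that differ between two aligned sequences (standard definition):

```latex
% p-distance between two aligned protein sequences
p = \frac{n_d}{n}
```

Here n_d is the number of amino acid positions at which the two sequences differ and n is the total number of compared positions; the neighbour-joining algorithm then builds the tree from the resulting matrix of pairwise distances.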

Keywords: Chandipura, (-) ssRNA, viral RNA-dependent RNA polymerase, neighbour-joining method, p-distance algorithm, evolutionary marker

Procedia PDF Downloads 190
352 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)

Authors: Kutangila Malundama Succes, Koita Mahamadou

Abstract:

In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information they must rely on. It is therefore urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study addresses the particular case of the aquifers of the transboundary Taoudéni sedimentary basin in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area in order to achieve sustainable management of transboundary groundwater resources. The methodological approach first describes the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behavior of these units is studied through the analysis of spatio-temporal variations in piezometric levels. The data consist of 692 static level measurement points and 8 observation wells distributed over the area and capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometry, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the flows of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October, in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological setting, through the structure and hydrodynamic properties of the layers, was deduced. The spatial analysis reveals that piezometric contours vary between 166 and 633 m, with a trend indicating flow generally from southwest to northeast and recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with one component following the topography and another, deeper, significant component controlled by the regional SW-NE gradient. This latter component may present flows directed from the high reliefs towards the Nasso springs. In the spring area (Kou basin), the maximum average storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
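
The Water Table Fluctuation estimate quoted above follows the standard form of the method (given here as background; the specific yield value used by the authors is not stated):

```latex
% Water Table Fluctuation (WTF) method: change in groundwater storage
\Delta S = S_y \,\Delta h
```

Here Delta S is the change in groundwater storage (mm), S_y the specific yield of the aquifer, and Delta h the rise in the water table attributed to recharge over the period considered.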

Keywords: hydrodynamic behaviour, Taoudéni basin, piezometry, water table fluctuation

Procedia PDF Downloads 61
351 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction for composite material components of offshore structures subjected to low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and with the impact load. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario stems from the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies, and individual cracks in a ply are difficult to track. To overcome these problems, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function is established accordingly, the lamina being safe while the stresses are such that g(x) > 0. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. A chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario. The frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
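
In this reliability setting, the probability of failure follows the standard definition in terms of the limit state function g(x); the Gaussian process response surface simply stands in for g when it is expensive to evaluate:

```latex
% Probability of failure for limit state function g(x), with X the vector of
% random inputs (material properties, impact velocity) and f_X its joint density
P_f = \Pr\big[g(X) \le 0\big] = \int_{g(x)\le 0} f_X(x)\, \mathrm{d}x
```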

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 272
350 Optimization of Biogas Production Using Co-Digestion Feedstocks via Anaerobic Technology

Authors: E Tolufase

Abstract:

The demand for, high costs of, and health implications of using energy derived from hydrocarbon compounds have necessitated a continuous search for alternative sources of energy. The world energy market faces several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and economic and urbanization growth; in addition, some rural areas in Nigeria still depend largely on wood, charcoal, kerosene and petrol, among others, as their sources of energy. These shortfalls in energy supply and demand, together with the risks of global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, have brought much attention to efficiently harnessing renewable energy sources. Among the renewable energy resources, biogas is a very promising clean energy technology for power production and for vehicle and domestic use. Therefore, optimization of biogas yield and quality is imperative. Hence, this study investigated the yield and quality of biogas using a low-cost bio-digester and a combination of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it is cheap, simple, and appropriate for the different substrates used to obtain the desired results. Three substrates were used, cow dung, chicken droppings and lemon grass, digested in five separate 21-litre digesters, A, B, C, D and E, and the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings and lemon grass were placed in bio-digesters A, B and C, respectively; the three substrates were co-digested in digester D at a mixing ratio of 7:1:2 and in digester E at a ratio of 5:3:2. The respective feedstock materials were collected locally, digested and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters. They were digested for a retention period of 28 days, and the physicochemical parameters, namely pressure, temperature, pH, volume of the gas collection system and volume of biogas produced, were all closely monitored and recorded daily. The values of pH and temperature ranged from 6.0 to 8.0 and from 22 °C to 35 °C, respectively. For the single substrates, bio-digester A (cow dung only) produced biogas with a total volume of 0.1607 m³ (average of 0.0054 m³ daily), while B (chicken droppings) produced 0.1722 m³ (average of 0.0057 m³ daily) and C (lemon grass) produced 0.1035 m³ (average of 0.0035 m³ daily). For the co-digested substrates, bio-digester D produced a total of 0.2007 m³ of biogas (average of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (average of 0.0066 m³ daily). It is evident from the results that combining different substrates gave higher yields than a single feedstock, and that the mixing ratio also played a role in the yield improvement. Bio-digesters D and E contained the same substrates mixed in different ratios, and a higher yield was observed in D, with a mixing ratio of 7:1:2, than in E, with a ratio of 5:3:2. Therefore, co-digestion of substrates and mixing proportions are important factors for the optimization of biogas production.

Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization

Procedia PDF Downloads 21
349 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region

Authors: Bhakti Chitale

Abstract:

Mumbai, the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4035 sq. km with a population of 22.8 million people. This population is mostly urban, with 91% living in areas of Municipal Corporations and Councils. Another 3% live in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1000 villages. On the one hand, MMR makes the highest contribution to the nation's overall economy; on the other, it presents the intolerable picture of about 2 million people living in slums, or without even that, in totally unhygienic conditions and with a total loss of hope. Coming generations will be adversely affected if a solution is not worked out. This study is an attempt to work out such a solution. The Mumbai Metropolitan Region Development Authority (MMRDA) is the state government's authority, specially formed to govern the development of MMR. MMRDA is engaged in long-term planning, promotion of new growth centres, implementation of strategic projects and financing of infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and possible options for improvement. The author was the officer in charge of that assignment. This paper sheds light on the outcomes of the research study, which range from the adverse effects of government policies and the automatic responses of the housing market to effects on planning processes and the overall changing needs of housing patterns in the world due to changes in social mechanisms. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, realities and need for innovation in housing policies all over the world. The paper further explains a few success stories and failure stories of government initiatives, with reasons. It gives a clear idea of the differences in the housing needs of people from different economic groups and of the direct and indirect market pressures on low-cost housing. A striking phenomenon emerged: a large percentage of houses lie vacant in spite of the huge need. The housing market is affected by developments and other physical and financial changes taking place in nearby areas or cities, by changes in cities located far from the region, and also by international investments or policy changes. Instead of depending only on government action to generate affordable housing, it becomes equally important to make housing markets generate such stock automatically while keeping them sustainable; this is the aim of the whole movement. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations.

Keywords: Mumbai India, slum housing, region planning, market recommendations

Procedia PDF Downloads 275
348 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. It focuses on timetables, rolling stock and crew duties, but does not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then allows calibration of the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested in the case of a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the train departure times, the train speed reduction at a given position and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. The Pareto front is also built.
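
For reference, the first-order and total Sobol indices that underpin factor prioritization and factor fixing are defined as (standard definitions; the generalized indices used above extend these to multiple outputs):

```latex
% First-order Sobol index of input X_i: fraction of output variance explained by X_i alone
S_i = \frac{\operatorname{Var}_{X_i}\!\big(\mathbb{E}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)}

% Total Sobol index: contribution of X_i including all its interactions
S_{T_i} = 1 - \frac{\operatorname{Var}_{X_{\sim i}}\!\big(\mathbb{E}[\,Y \mid X_{\sim i}\,]\big)}{\operatorname{Var}(Y)}
```

Here X_{~i} denotes all inputs except X_i; inputs with a large S_i are prioritized, while inputs whose total index S_{T_i} is close to zero can be fixed without affecting the outputs.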

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 394
347 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

A cryosorption pump is the best solution to achieve a clean, vibration-free ultra-high vacuum. Furthermore, the operation of a cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which is used to adsorb the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for the efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite. In this case, the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers a completely spatially random distribution of filler particles, described by a binomial distribution. The results obtained with the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for the prediction of other properties of composite materials involving a filler in a base matrix. The present studies will aid in the understanding of low-temperature heat transfer, which in turn will be useful for the development of a high-performance cryosorption pump.
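
As a classical point of comparison for such filler-in-matrix composites (one of the "established models" a new model is typically checked against, not the authors' binomial-distribution model), the Maxwell effective-medium estimate for well-dispersed spherical fillers is:

```latex
% Maxwell (Maxwell-Garnett) estimate of the effective thermal conductivity of a
% dilute suspension of spherical filler particles in a continuous matrix
k_{\mathrm{eff}} = k_m \,
  \frac{k_f + 2k_m + 2\phi\,(k_f - k_m)}{k_f + 2k_m - \phi\,(k_f - k_m)}
```

Here k_m and k_f are the matrix and filler conductivities and phi is the filler volume fraction; models of this family assume spherical inclusions and tend to underestimate the enhancement from high-aspect-ratio fillers such as graphene, which motivates distribution-aware models like the one proposed above.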

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 87
346 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult with limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly causes another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect the relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be obtained at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events within 5 minutes, in both the computed and the observed data. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
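
A minimal sketch of one common conditional independence test that a framework like this could use, a partial-correlation test with Fisher's z transform; the abstract does not specify which test the authors apply, and the feature names below are synthetic illustrations:

```python
# Hypothetical sketch: test whether event features X and Y are independent given Z
# via partial correlation and Fisher's z transform. Illustrative only.
import numpy as np
from scipy import stats

def partial_corr_test(x, y, z):
    """Return (partial correlation of x and y given z, two-sided p-value)."""
    Z = np.column_stack([np.ones(len(x)), z])           # conditioning set with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residual of x after regressing on Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residual of y after regressing on Z
    r = np.corrcoef(rx, ry)[0, 1]
    n, k = len(x), Z.shape[1] - 1                       # k = number of conditioning variables
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)  # Fisher z statistic
    p = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return r, p

# Toy example with synthetic event features conditioned on overall traffic volume.
rng = np.random.default_rng(1)
volume = rng.normal(size=500)
port_entropy = 0.8 * volume + rng.normal(scale=0.5, size=500)
inter_arrival = 0.6 * volume + rng.normal(scale=0.5, size=500)
print(partial_corr_test(port_entropy, inter_arrival, volume.reshape(-1, 1)))
```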

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 154
345 Structural Health Monitoring using Fibre Bragg Grating Sensors in Slab and Beams

Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname

Abstract:

Many existing and newly built structures are constructed on the basis of the engineer's design and the workmanship of the construction company. However, for larger structures to which more people are exposed, structural integrity is of great importance for the safety of the occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed and stored, and any inconsistencies found in the collected data can be flagged. This is where the fibre Bragg grating (FBG) monitoring system is introduced. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for assessing and predicting the structure's integrity. Embedded fibre optic sensors were used in this study, fibre Bragg grating sensors in particular. The procedure made use of the wavelength-shift demodulation technique and of an inscription process based on the phase mask technique. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted, one purposely to collect temperature recordings and the other to collect strain and temperature. The data were collected over a period of time, analysed, and used to produce bending moment diagrams to make predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be used for structural health monitoring in any environment. From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m and 45.20 kN.m (from the experimental bending moment diagram), while, as per the idealistic (ultimate limit state) calculation, values of 133 kN.m and 226.2 kN.m were obtained. The difference in values gave room for an early warning system, in other words, a reserve capacity of approximately 50% to failure.
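
The wavelength-shift demodulation mentioned above rests on the standard fibre Bragg grating response to strain and temperature (textbook form, not the specific calibration used in this study):

```latex
% Relative shift of the Bragg wavelength with axial strain and temperature
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T
```

Here p_e is the effective photo-elastic coefficient, epsilon the axial strain, alpha_Lambda the thermal expansion coefficient and alpha_n the thermo-optic coefficient of the fibre. The dedicated temperature-only cable allows the Delta T term to be subtracted, so the remaining shift gives strain, from which the stresses and bending moments reported above are derived.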

Keywords: fibre bragg grating, structural health monitoring, fibre optic sensors, beams

Procedia PDF Downloads 134
344 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies

Authors: Tania Viju, Bimal P., Naseer M. A.

Abstract:

This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a segment of road where the frequency of accident occurrence is disproportionately greater than on other sections of the roadway. According to a report by the World Bank, India accounts for the highest share, eleven percent, of global road accident deaths with just one percent of the world's vehicles. Hence, in 2015 the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the traffic accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study helps to understand and evaluate the existing methods of accident blackspot identification and prediction used around the world and their application to Indian roadways. The decision support system, with the help of IoT, ICT and smart systems, acts as a management and planning tool for the government for employing efficient and cost-effective rectification strategies. In order to develop decision criteria, several factors, in terms of quantitative as well as qualitative data, that influence the safety conditions of the road are analyzed. Factors include past accident severity data, occurrence time, light, weather and road conditions, visibility, driver conditions, junction type, land use, road markings and signs, road geometry, etc. The framework conceptualizes decision-making by classifying blackspot stretches based on factors like accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors. The decision support system will help the public administration dynamically manage and plan the safety interventions required to enhance the safety of the road network.

Keywords: decision support system, dynamic management, road accident blackspots, road safety

Procedia PDF Downloads 138
343 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contributions of the vapor and liquid phases to the overpressure generation in the near field. Small-scale experiments were undertaken to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, along with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel rupture and, by shadowgraphy, the formation of the blast. The results show that the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak. A prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory of how the second shock observed in the experiments forms is presented, supported by an analogy with numerical simulation. The phase change dynamics, observed through a window in the vessel, are also discussed. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
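
The abstract does not give the prediction model itself; as a hedged illustration of the underlying idea that the vapor space drives the lead shock, the sketch below estimates the expansion energy of the compressed vapor with Brode's expression and converts it to a TNT-equivalent scaled distance. The burst pressure, vapor volume, and TNT specific energy are assumed values, not the study's data.

```python
GAMMA = 1.13            # approximate ratio of specific heats for propane vapor
E_TNT = 4.5e6           # J/kg, assumed specific energy of TNT

def brode_energy(p_burst, p_atm, vapor_volume):
    """Expansion energy of the compressed vapor space: E = (p1 - p0) * V / (gamma - 1)."""
    return (p_burst - p_atm) * vapor_volume / (GAMMA - 1.0)

def scaled_distance(r, energy):
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3), with W the TNT-equivalent mass."""
    w_tnt = energy / E_TNT
    return r / w_tnt ** (1.0 / 3.0)

E = brode_energy(p_burst=1.8e6, p_atm=101e3, vapor_volume=0.04)   # 18 bar, 40 L vapor space
Z = scaled_distance(r=2.0, energy=E)                               # 2 m from the vessel
print(f"E = {E/1e3:.0f} kJ, TNT eq. = {E/E_TNT*1000:.0f} g, Z = {Z:.2f} m/kg^(1/3)")
# The lead-shock overpressure would then be read from standard blast charts
# (e.g. Kingery-Bulmash curves) at this scaled distance.
```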

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 73
342 The Role of Motivational Beliefs and Self-Regulated Learning Strategies in the Prediction of Mathematics Teacher Candidates' Technological Pedagogical and Content Knowledge (TPACK) Perceptions

Authors: Ahmet Erdoğan, Şahin Kesici, Mustafa Baloğlu

Abstract:

Information technologies have led to changes in the areas of communication, learning, and teaching. Besides offering many opportunities to learners, these technologies have changed the teaching methods and beliefs of teachers. What Technological Pedagogical Content Knowledge (TPACK) means to teachers is considerably important for integrating technology successfully into teaching processes. It is necessary to understand how to plan and apply teacher training programs in order to balance students' pedagogical and technological knowledge. Because of many inefficient teacher training programs, teachers have difficulties in relating technology, pedagogy, and content knowledge to each other. When providing efficient, technology-supported training, understanding the three main components (technology, pedagogy, and content knowledge) and their relationships is crucial. The purpose of this study is to determine whether motivational beliefs and self-regulated learning strategies are significant predictors of mathematics teacher candidates' TPACK perceptions. One hundred seventy-five Turkish mathematics teacher candidates responded to the Motivated Strategies for Learning Questionnaire (MSLQ) and the Technological Pedagogical and Content Knowledge (TPACK) Scale. Of the group, 129 (73.7%) were women and 46 (26.3%) were men. Participants' ages ranged from 20 to 31 years, with a mean of 23.04 years (SD = 2.001). A multiple linear regression analysis was used to test the relationship between the predictor variables, mathematics teacher candidates' motivational beliefs and self-regulated learning strategies, and the dependent variable, TPACK perceptions. It was determined that self-efficacy for learning and performance and intrinsic goal orientation are significant predictors of mathematics teacher candidates' TPACK perceptions. Additionally, mathematics teacher candidates' critical thinking, metacognitive self-regulation, organisation, time and study environment management, and help-seeking were found to be significant predictors of their TPACK perceptions.
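
A minimal sketch of the kind of multiple linear regression reported above, using synthetic data; the predictor names, sample values, and the use of Python's statsmodels (rather than the authors' own software) are assumptions made for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 175                                   # sample size reported in the abstract
X = rng.normal(size=(n, 3))               # e.g. self-efficacy, intrinsic goal orientation,
                                          # metacognitive self-regulation (hypothetical)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.8, size=n)   # TPACK perception score

# Ordinary least squares with an intercept: coefficients, t-tests and R^2
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```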

Keywords: candidate mathematics teachers, motivational beliefs, self-regulated learning strategies, technological and pedagogical knowledge, content knowledge

Procedia PDF Downloads 477
341 In Vitro Propagation of Vanilla Planifolia Using Nodal Explants and Varied Concentrations of Naphthaleneacetic Acid (NAA) and 6-Benzylaminopurine (BAP)

Authors: Jessica Arthur, Duke Amegah, Kingsley Akenten Wiafe

Abstract:

Background: Vanilla planifolia bears the only edible fruit of the orchid family (Orchidaceae) among the more than 35,000 orchid species found worldwide. In Ghana, vanilla has been found growing in the wild, but it is underutilized for commercial production, most likely due to a lack of knowledge of the best NAA and BAP combinations for in vitro propagation and successful acclimatization of regenerated plants. The growing interest and global demand for elite Vanilla planifolia plants and natural vanilla flavour emphasize the need for an effective industrial-scale micropropagation protocol. Tissue culture systems are increasingly used to grow disease-free plants, and reliable in vitro methods can also produce plantlets with typically modest proliferation rates. This study sought to develop an efficient protocol for in vitro propagation of vanilla from nodal explants by testing different concentrations of NAA and BAP for proliferation of the entire plant. Methods: Nodal explants with dormant axillary buds were obtained from year-old laboratory-grown Vanilla planifolia plants. MS medium was prepared with a nutrient stock solution (containing macronutrients, micronutrients, iron solution and vitamins) and semi-solidified using phytagel. It was supplemented with different concentrations of NAA and BAP to induce multiple shoots and roots (0.5 mg/L BAP with NAA at 0, 0.5, 1, 1.5, and 2.0 mg/L, and vice versa). The explants were sterilized, cultured in labelled test tubes, and incubated at 26°C ± 2°C under a 16/8-hour light/dark cycle. Data on shoot and root growth, leaf number, node number, and survival percentage were collected over three consecutive two-week periods. The data were square-root transformed and subjected to ANOVA and LSD at a 5% significance level using the R statistical package. Results: Shoots emerged 8 days and roots 12 days after inoculation, with a 94% survival rate. Among the NAA treatments, MS medium supplemented with 2.00 mg/L NAA resulted in the highest shoot length (10.45 cm), maximum root number (1.51), maximum shoot number (1.47) and the highest number of leaves (1.29). MS medium containing 1.00 mg/L NAA produced the highest number of nodes (1.62) and root length (14.27 cm). A similar growth pattern was observed for the BAP treatments. MS medium supplemented with 1.50 mg/L BAP resulted in the highest shoot length (14.98 cm), the highest number of nodes (4.60), the highest number of leaves (1.75) and the maximum shoot number (1.57). MS medium containing 0.50 mg/L BAP and 1.0 mg/L BAP generated the maximum root number (1.44) and the highest root length (13.25 cm), respectively. However, the best concentration combinations for maximizing shoot and root growth were 1.5 mg/L BAP combined with 0.5 mg/L NAA, and 1.0 mg/L NAA combined with 0.5 mg/L BAP, respectively. These concentrations were optimum for in vitro growth and production of Vanilla planifolia. Significance: This study presents a standardized protocol for laboratories to produce clean vanilla plantlets, enhancing cultivation in Ghana and beyond. It provides insights into Vanilla planifolia's growth patterns and hormone responses, aiding future research and cultivation.
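
The analysis step described above (square-root transformation followed by ANOVA at the 5% level) can be illustrated as below; the study used the R statistical package, and the shoot-length values here are invented for demonstration only.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical shoot lengths (cm) per NAA level after several weeks of culture
naa_0_5 = np.sqrt([6.1, 5.8, 7.0, 6.4])      # square-root transform, as in the study
naa_1_0 = np.sqrt([8.2, 7.9, 8.8, 8.5])
naa_2_0 = np.sqrt([10.1, 10.8, 9.9, 10.5])

f_stat, p_value = f_oneway(naa_0_5, naa_1_0, naa_2_0)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Pairwise LSD comparisons at the 5% level would follow only if p < 0.05.
```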

Keywords: Vanilla planifolia, In vitro propagation, plant hormones, MS media

Procedia PDF Downloads 57
340 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although fires can be caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of a forest fire on hydrological processes and to propose the model that best describes changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of the processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. In that respect, two catchments that experienced severe forest fires were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated with data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling produced surface runoff estimates and hence a prediction of the catchments' hydrological responses to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
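
A hedged sketch of the two quantitative steps mentioned above, the burn index from pre- and post-fire imagery and the curve-number runoff estimate; the band reflectances, CN values, and rainfall depth are illustrative assumptions, not the study's data.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from Sentinel-2 B8 (NIR) and B12 (SWIR) reflectance."""
    return (nir - swir) / (nir + swir)

def scs_runoff(p_mm, cn):
    """SCS curve-number direct runoff depth (mm) for a rainfall depth p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

# Burn mapping: difference of pre- and post-fire NBR (hypothetical pixel values)
d_nbr = nbr(nir=0.42, swir=0.18) - nbr(nir=0.28, swir=0.35)
print(f"dNBR = {d_nbr:.2f}  (values above ~0.1 are usually flagged as burnt)")

# Burnt forest is typically assigned a higher CN than intact forest,
# so the same storm yields more direct runoff after the fire:
print(f"pre-fire  Q = {scs_runoff(60, cn=60):.1f} mm")
print(f"post-fire Q = {scs_runoff(60, cn=85):.1f} mm")
```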

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 130
339 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change how we think about, plan, and produce a specific field. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some degree. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures entirely. Logging images, available well tests, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, production from this field follows a two-phase plan: self-depletion of oil followed by a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 89
338 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

Biogeochemical studies of macrophytes shed light on element bioavailability, transfer through food webs and possible effects on the biota, and provide a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, submerged Ceratophyllum demersum (hornwort) and free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the element stock accumulated in each plant, and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas in Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from 1 x 1 m squares where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on one square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir, and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the content of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn was determined using the atomic absorption spectrometry (AAS) method. Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation factors (BCF = content in plant / concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit. Based on BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. A portion of the Ca and K was actively transported to the stems, but only Mg was transported to the leaves. Although the biomass of C. demersum was two times greater than that of H. morsus-ranae, its element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may influence the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, also the incorporation of toxic elements into food chains. There was a significant positive correlation between the content of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). High bioaccumulation rates and the correlations between plant and water element concentrations point to their possible use as passive biomonitors of aquatic pollution.
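
As a small illustration of the quantities used above, the sketch below computes bioaccumulation factors (BCF = element content in plant / concentration in water) and a Spearman rank correlation between water and root concentrations; the concentration values are invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

water_mn = np.array([0.05, 0.12, 0.08, 0.20, 0.15])   # mg/L Mn in reservoir water (hypothetical)
root_mn = np.array([18.0, 35.0, 40.0, 75.0, 52.0])    # mg/kg dry weight in roots (hypothetical)

bcf = root_mn / water_mn                               # bioaccumulation factor per site
rho, p = spearmanr(water_mn, root_mn)                  # rank correlation water vs. plant
print(f"BCF range: {bcf.min():.0f}-{bcf.max():.0f}, Spearman rho = {rho:.2f} (p = {p:.3f})")
```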

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 179
337 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved management strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the interval between test day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of MSPE (RMSPE), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92 vs. 6.6 l) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best RMSPE values were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were obtained with the 7D interval for both typical and atypical curves. These results represent a first approach to defining an adequate recording interval for dairy sheep in Latin America and showed a better fit of the Wood model using the 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
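
The Wood (incomplete gamma) model, y(t) = a·t^b·e^(-ct), was fitted with R's minpack.lm; the sketch below shows an equivalent nonlinear least-squares fit in Python on invented weekly records, together with one common least-squares form of the AIC. The parameter values, noise level, and the specific AIC formula are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood model: daily yield a * t^b * exp(-c*t), with t in days in milk."""
    return a * t**b * np.exp(-c * t)

t = np.arange(7.0, 148.0, 7.0)                                   # weekly (7D) test-day records
rng = np.random.default_rng(1)
y = wood(t, 0.9, 0.25, 0.01) + rng.normal(0, 0.05, t.size)       # synthetic yields (L/day)

params, _ = curve_fit(wood, t, y, p0=[1.0, 0.2, 0.02])           # Levenberg-Marquardt-style fit
a, b, c = params
rss = np.sum((y - wood(t, *params)) ** 2)
n, k = t.size, 3
mspe = rss / n
aic = n * np.log(mspe) + 2 * k                                   # least-squares form of AIC
print(f"a={a:.2f} b={b:.2f} c={c:.3f}, peak day={b/c:.0f}, "
      f"RMSPE={np.sqrt(mspe):.3f}, AIC={aic:.1f}")
```

Coarser recording regimes (14D, 21D, 28D) can be emulated simply by subsampling `t` and `y` before fitting and comparing the resulting criteria.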

Keywords: gamma incomplete, ewes, shape curves, modeling

Procedia PDF Downloads 68
336 Identification of the Expression of Top Deregulated MiRNAs in Rheumatoid Arthritis and Osteoarthritis

Authors: Hala Raslan, Noha Eltaweel, Hanaa Rasmi, Solaf Kamel, May Magdy, Sherif Ismail, Khalda Amr

Abstract:

Introduction: Rheumatoid arthritis (RA) is an inflammatory, autoimmune disorder with progressive joint damage. Osteoarthritis (OA) is a degenerative disease of the articular cartilage that shows multiple clinical manifestations or symptoms resembling those of RA. Genetic predisposition is believed to be a principal etiological factor for RA and OA. In this study, we aimed to measure the expression of the top deregulated miRNAs that might contribute to pathogenesis in both diseases, according to our latest NGS analysis. Six of the deregulated miRNAs were selected because they had multiple target genes in the RA pathway and are therefore more likely to affect RA pathogenesis. Methods: Eighty cases were recruited in this study: 45 rheumatoid arthritis (RA) and 30 osteoarthritis (OA) patients, as well as 20 healthy controls. The selection of the miRNAs from our latest NGS study was done using miRwalk according to the number of their target genes that are members of the KEGG RA pathway. Total RNA was isolated from the plasma of all recruited cases. The cDNA was generated with the miRcury RT Kit and then used as a template for real-time PCR with miRcury Primer Assays and the miRcury SYBR Green PCR Kit. Fold changes were calculated from CT values using the ΔΔCT method of relative quantification. Results were compared for RA vs. controls and OA vs. controls. Target gene prediction and functional annotation of the deregulated miRNAs were done using Mienturnet. Results: Six miRNAs were selected: miR-15b-3p, -128-3p, -194-3p, -328-3p, -542-3p and -3180-5p. In RA samples, three of the measured miRNAs were upregulated (miR-194, -542, and -3180; mean Rq = 2.6, 3.8 and 8.05; P-value = 0.07, 0.05 and 0.01, respectively), while the remaining three were downregulated (miR-15b, -128 and -328; mean Rq = 0.21, 0.39 and 0.6; P-value < 0.0001, < 0.0001 and 0.02, respectively), all with high statistical significance except miR-194. In OA samples, two of the measured miRNAs were upregulated (miR-194 and -3180; mean Rq = 2.6 and 7.7; P-value = 0.1 and 0.03, respectively), while the remaining four were downregulated (miR-15b, -128, -328 and -542; mean Rq = 0.5, 0.03, 0.08 and 0.5; P-value = 0.0008, 0.003, 0.006 and 0.4, respectively), with statistical significance compared to controls except for miR-194 and miR-542. Functional enrichment of the selected top deregulated miRNAs revealed the most highly enriched KEGG pathways and GO terms. Conclusion: Five of the studied miRNAs were strongly deregulated in RA and OA; they might be highly involved in the disease pathogenesis and so might be future therapeutic targets. Further functional studies are crucial to assess their roles and actual target genes.
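
A minimal sketch of the relative-quantification step described above (fold change by the 2^-ΔΔCT method); the CT values and the reference-miRNA normalisation shown here are illustrative assumptions, not the study's measurements.

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by 2^-ddCT, normalised to a reference (housekeeping) miRNA."""
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_case - d_ct_ctrl)

# Hypothetical mean CT values for one miRNA in RA plasma vs. healthy controls
rq = fold_change(ct_target_case=26.1, ct_ref_case=19.8,
                 ct_target_ctrl=28.0, ct_ref_ctrl=19.9)
print(f"Rq = {rq:.2f}  (>1 indicates up-regulation, <1 down-regulation vs. controls)")
```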

Keywords: MiRNAs, expression, rheumatoid arthritis, osteoarthritis

Procedia PDF Downloads 74