Search results for: testing simulation
1513 Thermo-Economic Analysis of a Natural Draft Direct Cooling System for a Molten Salt Power Tower
Authors: Huiqiang Yang, Domingo Santana
Abstract:
Reducing parasitic power consumption of concentrating solar power plants is the main challenge in increasing overall efficiency, particularly for molten salt tower technology. One of the most effective approaches to reducing parasitic power consumption is to implement a natural draft dry cooling system instead of the commonly used mechanical draft dry cooling system. In this paper, a thermo-economic analysis of a natural draft direct cooling system was performed based on a 100 MWe commercial-scale molten salt power plant. In this configuration, the exhaust steam from the steam turbine flows directly to the heat exchanger bundles inside the natural draft dry cooling tower, which eliminates the power consumption of circulation pumps or fans, although the cooling tower shadows a portion of the heliostat field. The simulation results show that, compared to a mechanical draft cooling system, the annual solar field efficiency is decreased by about 0.2% due to the shadow, which is equivalent to a reduction of approximately 13% of the solar field area. By contrast, deliberately reducing the solar field size by 13% in a molten salt power plant with a natural draft dry cooling system will lead to a reduction in the levelized cost of electricity (LCOE) of about 4.06% without affecting the power generated.
Keywords: molten salt power tower, natural draft dry cooling, parasitic power consumption, commercial scale
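The LCOE trade-off described in this abstract can be illustrated with a minimal, hypothetical calculation. The capital cost, O&M cost, discount rate, and annual generation figures below are illustrative assumptions, not values from the paper:

```python
def crf(rate, years):
    """Capital recovery factor used to annualize up-front capital cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, om_annual, energy_mwh, rate=0.08, years=25):
    """Simplified levelized cost of electricity in $/MWh."""
    return (capex * crf(rate, years) + om_annual) / energy_mwh

# Illustrative only: shrinking the solar field by 13% cuts capital cost,
# while (per the paper) the natural draft tower leaves generation intact.
base = lcoe(capex=100e6, om_annual=2e6, energy_mwh=4.0e5)
reduced_field = lcoe(capex=87e6, om_annual=2e6, energy_mwh=4.0e5)
print(f"LCOE reduction: {100 * (base - reduced_field) / base:.1f}%")
```

The sketch shows only the direction of the effect: lower capital cost at unchanged generation necessarily lowers LCOE; the paper's 4.06% figure depends on its full cost model.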
Procedia PDF Downloads 167
1512 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
In this paper, we develop a supplier selection and order allocation multi-objective model in a stochastic environment, in which the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are assumed to be stochastic parameters following arbitrary probability distributions. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, total late-delivered items, and total rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters have greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection
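The simulation-based fitness evaluation described above can be sketched as follows. This is a minimal illustration assuming normally distributed unit costs and a single chance constraint (total cost within budget), not the paper's full multi-objective model; all parameter values are invented:

```python
import random

def mc_fitness(alloc, cost_mu, cost_sd, budget, n_sims=300):
    """Dependent-chance objective: Monte Carlo estimate of the probability
    that the total purchasing cost stays within the budget."""
    hits = 0
    for _ in range(n_sims):
        total = sum(q * random.gauss(m, s)
                    for q, m, s in zip(alloc, cost_mu, cost_sd))
        hits += total <= budget
    return hits / n_sims

def ga(demand, cost_mu, cost_sd, budget, pop_size=10, gens=15):
    """Tiny genetic algorithm over order allocations summing to 'demand'."""
    n = len(cost_mu)

    def rand_alloc():
        # random feasible split of total demand across n suppliers
        cuts = sorted(random.randint(0, demand) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [demand])]

    def mutate(alloc):
        # move one unit between two suppliers, preserving feasibility
        child = alloc[:]
        i, j = random.sample(range(n), 2)
        if child[i] > 0:
            child[i] -= 1
            child[j] += 1
        return child

    fitness = lambda a: mc_fitness(a, cost_mu, cost_sd, budget)
    population = [rand_alloc() for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(population, key=fitness)
```

Because the fitness is itself a Monte Carlo estimate, rankings are noisy; larger `n_sims` trades runtime for a more stable selection, which mirrors the simulation-in-the-loop design the abstract describes.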
Procedia PDF Downloads 256
1511 The Role of Evaluation for Effective and Efficient Change in Higher Education Institutions
Authors: Pattaka Sa-Ngimnet
Abstract:
That the university as we have known it is no longer serving the needs of the vast majority of students and potential students has been a topic of much discussion. Institutions of higher education, in this age of global culture, are in a process of metamorphosis. Technology is being used to allow more students, older students, working students, and disabled students who cannot attend conventional classes to have greater access to higher education through the internet. But change must come about only after much evaluation and experimentation, or education will simply become a commodity, as in some cases it already has. This paper is concerned with the meaning and methods of change and evaluation as they are applied to institutions of higher education. Organizations generally have different goals and different approaches to being successful. However, the means of reaching those goals require rational and effective planning. Any plan for successful change in any institution must take into account both effectiveness and efficiency and the differences between them. “Effectiveness” refers to an adequate means of achieving an objective. “Efficiency” refers to the ability to achieve an objective without waste of time or resources (The Free Dictionary). So an effective means may not be efficient, and an efficient means may not be effective. The goal is to reach a synthesis of effectiveness and efficiency that will maximize both to the extent each is limited by the other. The focus of this paper, then, is to determine how an educational institution can become either successful or oppressive depending on the kinds of planning, evaluating, and changes that operate by and on the administration. If the plan is concerned only with efficiency, the institution can easily become oppressive and lose sight of its purpose of educating students. If it is overly concentrated on effectiveness, the students may receive a superior education in the short run, but the institution will face operating difficulties. In becoming only goal-oriented, institutions also face problems. Simply stated, if the institution reaches its goals, the stakeholders may become satisfied and fail to change and keep up with the needs of the times. So goals should be seen only as benchmarks in a process of becoming even better at providing quality education. Constant and consistent evaluation is the key to making all these factors come together in a successful process of planning, testing, and changing the plans as needed. The focus of the evaluation has to be considered. Evaluations must take into account the progress and needs of students, the methods and skills of instructors, the resources available from the institution, and the styles and objectives of administrators. Thus the role of evaluation is pivotal in providing for the maximum of both effective and efficient change in higher education institutions.
Keywords: change, effectiveness, efficiency, education
Procedia PDF Downloads 320
1510 Determination of Safe Ore Extraction Methodology beneath Permanent Extraction in a Lead Zinc Mine with the Help of FLAC3D Numerical Model
Authors: Ayan Giri, Lukaranjan Phukan, Shantanu Karmakar
Abstract:
Structure and tectonics play a vital role in ore genesis and deposition. The existence of a swelling structure below the current level of a mine led to the discovery of ore below some permanent developments of the mine. The discovery and extraction of this ore body are critical to sustaining the business requirements of the mine. The challenge was to extract the ore without hampering the global stability of the mine. In order to do so, different mining options were considered and analysed by numerical modelling in FLAC3D software. The constitutive model prepared for this simulation is the Improved Unified Constitutive Model (IUCM), which can more accurately predict the stress-strain relationships in a continuum model. The IUCM employs the Hoek-Brown criterion to determine the instantaneous Mohr-Coulomb parameters, cohesion (c) and friction angle (φ), at each level of confining stress. The extra swelled part measures 50 m along the north-south strike and 50 m along the east-west strike. On the north side, a stope (P1) of 25 m NS width has already been excavated. The different options considered were (a) open stoping extraction of the southern part (P0) of 50 m to the full extent; (b) extraction of the southern part of 25 m, then filling of both primaries and extraction of the 25 m secondary (S0) in between; (c) complete extraction of the southern part (P0), followed by backfilling, with a modified design of the secondary (S0) for the overall stability of the permanent excavation above the stoping.
Keywords: extraction, IUCM, FLAC3D, stoping, tectonics
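The IUCM's conversion from a Hoek-Brown envelope to instantaneous Mohr-Coulomb parameters can be sketched as below. This follows the tangent-line construction of the generalized Hoek-Brown criterion (Hoek et al., 2002); the input rock-mass parameters are chosen purely for illustration and are not from the mine in the paper:

```python
import math

def hb_instantaneous_mc(sig3, sigci, mb, s, a):
    """Instantaneous Mohr-Coulomb cohesion c and friction angle phi (deg)
    from the generalized Hoek-Brown envelope at confining stress sig3."""
    # slope k = d(sigma1)/d(sigma3) of the Hoek-Brown envelope at sig3
    k = 1 + a * mb * (mb * sig3 / sigci + s) ** (a - 1)
    # Mohr-Coulomb in principal stresses has slope (1+sin phi)/(1-sin phi)
    phi = math.asin((k - 1) / (k + 1))
    sig1 = sig3 + sigci * (mb * sig3 / sigci + s) ** a
    # cohesion from the tangent line through the point (sig3, sig1)
    c = (sig1 - k * sig3) * (1 - math.sin(phi)) / (2 * math.cos(phi))
    return c, math.degrees(phi)

# Illustrative rock-mass parameters only
c, phi = hb_instantaneous_mc(sig3=1.0, sigci=50.0, mb=5.0, s=0.001, a=0.5)
```

Evaluating this at each element's confining stress is what lets the model's strength parameters vary with confinement, rather than using a single global c and φ.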
Procedia PDF Downloads 212
1509 An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures
Authors: Julien Deborde, Thomas Milcent, Stéphane Glockner, Pierre Lubin
Abstract:
A fully Eulerian method is developed to solve the problem of fluid-elastic structure interactions based on a 1-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. In order to avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, which is a means of reducing the impact of waves on the coastline. So far, to our knowledge, only simulations with rigid or one-dimensional elastic structures have been studied in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
Keywords: damping wave, Eulerian formulation, finite volume, fluid structure interaction, hyperelastic material
Procedia PDF Downloads 323
1508 Efficient Estimation for the Cox Proportional Hazards Cure Model
Authors: Khandoker Akib Mohammad
Abstract:
While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; these subjects are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical performance of our proposed method is also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood
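The mixture cure structure described above (logistic incidence plus Cox PH latency) can be written compactly. The sketch below assumes a single covariate and a user-supplied baseline cumulative hazard; it is a schematic of the model itself, not of the paper's profile likelihood machinery:

```python
import math

def pop_survival(t, x, beta, gamma, base_cumhaz):
    """Population survival for a mixture cure model:
    S_pop(t|x) = (1 - pi(x)) + pi(x) * S_u(t|x),
    where pi(x) is the logistic probability of being uncured (incidence)
    and S_u is the Cox PH survival of the uncured subjects (latency)."""
    pi = 1.0 / (1.0 + math.exp(-(beta[0] + beta[1] * x)))
    s_uncured = math.exp(-base_cumhaz(t) * math.exp(gamma * x))
    return (1.0 - pi) + pi * s_uncured
```

As t grows, S_pop(t|x) plateaus at the cure fraction 1 - pi(x) instead of decaying to zero, which is the defining feature of cure models.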
Procedia PDF Downloads 144
1507 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi
Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur
Abstract:
Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a wide terminology used for the waste material produced during the diagnosis, treatment, or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which itself is a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, hepatitis B virus, and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%), and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery, and Medicine wards with 21.2%, 4.6%, and 4.3%, respectively. The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery, and Medicine wards with 0.29, 0.28, and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi
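The per-ward generation rates reported above follow from a simple normalization. A minimal sketch (the bed count and waste total below are invented for illustration, not taken from the study):

```python
def waste_per_bed_day(total_kg, beds, days):
    """Average biomedical waste generation rate in kg/bed/day."""
    return total_kg / (beds * days)

def bmw_share_pct(bmw_kg, total_kg):
    """Biomedical waste as a percentage of total hospital waste."""
    return 100.0 * bmw_kg / total_kg

# Hypothetical ward: 100 beds producing 7,300 kg of BMW over one year
rate = waste_per_bed_day(7300.0, beds=100, days=365)  # 0.2 kg/bed/day
```

Normalizing by bed-days is what makes rates comparable across wards of different sizes, which is how the 0.7 kg/bed/day figure for Obstetrics and Gynaecology was obtained.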
Procedia PDF Downloads 245
1506 Extreme Heat and Workforce Health in Southern Nevada
Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria
Abstract:
Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on health-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI was found to have increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 recording the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with the peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF, probably because HI uses the maximum temperature and humidity in its estimation, whereas EHF uses the average daily mean temperature. However, it is worth testing the EHF for the study zone because it has been reported to fit properly in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. The most vulnerable subpopulation was people over 50 years old, for whom 76% of the heat-related deaths were registered. Most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths. These deaths were associated with alcohol or illegal drug intoxication.
Keywords: heat, health, hazards, workforce
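The excess heat factor used above can be computed from daily mean temperatures. This sketch follows the standard Nairn-Fawcett formulation (three-day mean compared against a 95th-percentile climatology and against the preceding 30-day mean); the study's exact implementation may differ in detail:

```python
def ehf(daily_means, t95):
    """Excess heat factor for the last 3-day window of a series of
    daily mean temperatures (needs at least 33 days of data)."""
    t3 = sum(daily_means[-3:]) / 3.0        # current 3-day mean
    t30 = sum(daily_means[-33:-3]) / 30.0   # preceding 30-day mean
    ehi_sig = t3 - t95    # significance index vs. long-run climatology
    ehi_accl = t3 - t30   # acclimatization index vs. recent weather
    return ehi_sig * max(1.0, ehi_accl)
```

A positive EHF flags heat wave conditions, and larger values indicate more severe events, which is what allows the severity trend reported in the abstract to be quantified per episode.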
Procedia PDF Downloads 104
1505 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems
Authors: Messaoud Eljamai, Sami Hidouri
Abstract:
Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cellular interference remains a fundamental problem, and the use of massive MIMO does not change this: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate the inter-cellular interference. The proposed scheme combines the coordinated multipoint (CoMP) technique with an interference-cancelling algorithm. It requires the serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell is then capable of eliminating inter-cellular interference by generating and subtracting the interfering users' contribution from the received signal. The resulting signal is more reliable than the original received signal. This allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors.
Keywords: massive MIMO, CoMP, interference canceling algorithm, spectral efficiency
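The cancellation step described above, in which an interfered cell subtracts the reconstructed contribution of users whose decided symbols arrived over the backhaul, can be sketched in a few lines. The channel matrices and symbols below are synthetic, and the example is noise-free:

```python
import numpy as np

def cancel_interference(y, H_int, s_int):
    """Subtract the reconstructed inter-cell interference H_int @ s_int
    (symbols shared by the serving cell over the backhaul) from y."""
    return y - H_int @ s_int

rng = np.random.default_rng(0)
H_own = rng.standard_normal((4, 2))  # 4 BS antennas, 2 in-cell users
H_int = rng.standard_normal((4, 2))  # channel of 2 interfering users
s_own = rng.standard_normal(2)
s_int = rng.standard_normal(2)

y = H_own @ s_own + H_int @ s_int    # received uplink signal
cleaned = cancel_interference(y, H_int, s_int)
```

With perfect backhaul decisions and channel knowledge, the residual equals the in-cell users' contribution exactly; in practice, detection errors and noise limit the achievable gain.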
Procedia PDF Downloads 147
1504 Voltage Stability Margin-Based Approach for Placement of Distributed Generators in Power Systems
Authors: Oludamilare Bode Adewuyi, Yanxia Sun, Isaiah Gbadegesin Adebayo
Abstract:
Voltage stability analysis is crucial to the reliable and economic operation of power systems. The power systems of developing nations are more susceptible to failures due to continuously increasing load demand, which is not matched by generation increases and efficient transmission infrastructure. Thus, most power systems are heavily stressed, and the planning of extra generation from distributed generation sources needs to be done efficiently so as to ensure the security of the power system. Voltage stability index-based approaches for DG siting have been reported in the literature. However, most of the existing voltage stability indices are found to be inaccurate, especially for overloaded power systems. In this paper, the performance of a relatively different approach using a line voltage stability margin indicator, which has proven to have better accuracy, is presented and compared with a conventional line voltage stability index for DG siting using the Nigerian 28-bus system. The critical boundary index (CBI) for voltage stability margin estimation was deployed to identify suitable locations for DG placement, and its performance was compared with DG placement using the novel line stability index (NLSI) approach. From the simulation results, CBI and NLSI agreed closely on suitable locations for DG on the test system; while CBI identified bus 18 as the most suitable at system overload, NLSI identified bus 8 as the most suitable. Considering the effect of DG placement at the selected buses on the voltage magnitude profile, the results show that the DG placed on bus 18, identified by CBI, improved the performance of the power system better.
Keywords: voltage stability analysis, voltage collapse, voltage stability index, distributed generation
Procedia PDF Downloads 93
1503 Modeling of Combustion Process in the Piston Aircraft Engine Using an ECFM-3Z Model
Authors: Marcin Szlachetka, Konrad Pietrykowski
Abstract:
Modeling of a combustion process in a 9-cylinder aircraft engine is presented. The simulations of the combustion process in the IC engine provided information on the spatial and time distributions of selected quantities within the combustion chamber of the engine. The numerical analysis results were compared with the results of the indication process of the engine on the test stand. Modeling of the combustion process of the auto-ignited IC engine was carried out in AVL Fire within the study. For the calculations, an ECFM-3Z model was used. Verification of the simulation results was carried out by comparison of the pressure in the cylinder: the courses of indicated pressure obtained from the simulations and during the engine tests on a test stand were compared. The engine was braked by the propeller, which results in an adequate external power characteristic. The test object is a modified ASz-62IR engine with an injection system. The engine was running at take-off power. To check the optimum ignition timing with regard to power, calculations and tests were performed for 7 different moments of ignition. Analyses of the temperature distribution in the cylinder depending on the moment of ignition were carried out. Additionally, the course of pressure in the cylinder at different ignition delay angles of the second spark plug was examined. The swirling of the mixture in the combustion chamber was also analysed. It has been shown that the largest vortexes occur in the middle of the chamber and get smaller closer to the combustion chamber walls. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: CFD, combustion, internal combustion engine, aircraft engine
Procedia PDF Downloads 372
1502 Immunoinformatic Design and Evaluation of an Epitope-Based Tetravalent Vaccine against Human Hand, Foot, and Mouth Disease
Authors: Aliyu Maje Bello, Yaowaluck Maprang Roshorm
Abstract:
Hand, foot, and mouth disease (HFMD) is a highly contagious viral infection affecting mostly infants and children. Although enterovirus A71 (EV71) is usually the major causative agent of HFMD, other enteroviruses such as coxsackievirus A16, A10, and A6 have also been found in some of the recent outbreaks. The commercially available vaccines have demonstrated their effectiveness against only EV71 infection, with no protection against other enteroviruses. To address the limitation of the monovalent EV71 vaccine, the present study designed a tetravalent vaccine against the four major enteroviruses causing HFMD and primarily evaluated the designed vaccine using an immunoinformatics approach. The immunogen was designed to contain the EV71 VP1 protein and multiple reported epitopes from all four distinct enteroviruses and was thus designated a tetravalent vaccine. The 3D structure of the designed tetravalent vaccine was modeled, refined, and validated. Epitope screening showed the presence of B-cell, CTL, CD4 T-cell, and IFN epitopes with broad applicability among the Asian population. Docking analysis confirmed stable and strong binding interactions between the immunogen and the immune receptor, the B-cell receptor (BCR). In silico cloning and immune simulation analyses indicated high efficiency and sufficient expression of the vaccine candidate in humans. Overall, the promising results obtained from the in silico studies of the proposed tetravalent vaccine make it a potential candidate worth further experimental validation.
Keywords: enteroviruses, coxsackieviruses, hand foot and mouth disease, immunoinformatics, tetravalent vaccine
Procedia PDF Downloads 72
1501 A Method for Evaluating the Mechanical Stress on Mandibular Advancement Devices
Authors: Tsung-yin Lin, Yi-yu Lee, Ching-hua Hung
Abstract:
Snoring, the lay term for obstructive breathing during sleep, is one of the most prevalent of obnoxious human habits. Loud snoring usually disturbs others and makes them uncomfortable. Snoring also affects the sleep quality of snorers' bed partners, who, because of the noise, do not fall asleep easily. The reduced sleep quality caused by snoring leads to several medical problems, such as excessive daytime sleepiness, high blood pressure, and increased risk for cardiovascular disease and cerebrovascular accident. There are many non-prescription devices offered for sale on the market, but very limited data are available to support a beneficial effect of these devices on snoring or their use in treating obstructive sleep apnea (OSA). Mandibular advancement devices (MADs), also termed mandibular repositioning devices (MRDs), are removable devices which are worn at night during sleep. Most devices require a dental impression, bite registration, and fabrication by a dental laboratory. These devices are fixed to the upper and lower teeth and are adjusted to advance the mandible. The amount of protrusion is adjusted to meet the therapeutic requirements, comfort, and tolerance. Many devices have a fixed degree of advancement; some are adjustable to a limited degree. This study focuses on the stress analysis of mandibular advancement devices (MADs), which are considered a standard treatment for snoring promoted by the American Academy of Sleep Medicine (AASM). This paper proposes a new MAD design, and finite element analysis (FEA) is introduced to perform the stress simulation for this MAD.
Keywords: finite element analysis, mandibular advancement devices, mechanical stress, snoring
Procedia PDF Downloads 356
1500 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV
Authors: Joyce K. Anastasi, Bernadette Capili
Abstract:
Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms that are not curable and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV. It is estimated that DSP affects 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, viral protein-produced neurotoxicity, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients' perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From this needs assessment, we learned, from the patients' perspective, detailed descriptions of their symptoms, physical functioning with DSP, self-care remedies tried, and desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment. We developed and completed a feasibility study that provided us with invaluable information that informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for recruitment and retention of study participants.
Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS
Procedia PDF Downloads 84
1499 Biological Control of Karnal Bunt by Pseudomonas fluorescens
Authors: Geetika Vajpayee, Sugandha Asthana, Pratibha Kumari, Shanthy Sundaram
Abstract:
Pseudomonas species possess a variety of promising antifungal and growth-promoting activities in the wheat plant. In the present study, Pseudomonas fluorescens MTCC-9768 was tested against the plant pathogenic fungus Tilletia indica, which causes Karnal bunt, a quarantine disease of wheat (Triticum aestivum) affecting kernels of wheat. It is one of the 1/A1 harmful diseases of wheat worldwide under EU legislation. The disease develops in the growth phase through the spread of microscopic fungal spores (teliospores) dispersed by the wind. Current chemical fungicide treatments have been reported to reduce teliospore germination, but their effect is questionable, since T. indica can survive up to four years in the soil. The fungal growth inhibition tests were performed using the dual culture technique, and the results showed inhibition of 82.5%. The antagonistic bacteria-fungus interaction causes changes in the morphology of the hyphae, which were observed using lactophenol cotton blue staining and scanning electron microscopy (SEM). Rounded and swollen hyphal ends, called 'theca', were observed in the interacted fungus as compared to the control fungus (without bacterial interaction). The bacterium was tested for antagonistic activities such as protease, cellulase, HCN, and chitinase production. The growth-promoting assays showed increased production of IAA by the bacterium. The bacterial secondary metabolites were extracted in different solvents to test their growth-inhibiting properties. Characterization and purification of the antifungal compound were done by thin layer chromatography, and the Rf value was calculated (Rf = 0.54) and compared to that of the standard antifungal compound, 2,4-DAPG (Rf = 0.54). Further, the in vivo experiments showed a significant decrease in the severity of the disease in the wheat plant with both the direct injection method and seed treatment. Our results indicate that the compound extracted and purified from the antagonistic bacterium P. fluorescens MTCC-9768 may be used as a potential biocontrol agent against T. indica. This also suggests that the PGPR properties of the bacterium may be utilized by incorporating it into bio-fertilizers.
Keywords: antagonism, Karnal bunt, PGPR, Pseudomonas fluorescens
Procedia PDF Downloads 404
1498 Determination of Cohesive Zone Model’s Parameters Based On the Uniaxial Stress-Strain Curve
Authors: Y. J. Wang, C. Q. Ru
Abstract:
A key issue with cohesive zone models is how to determine the cohesive zone model (CZM) parameters from real material test data. In this paper, the uniaxial nominal stress-strain curve (SS curve) is used to determine two key parameters of a cohesive zone model: the maximum traction and the area under the traction-separation law (TSL) curve. To this end, the true SS curve is obtained from the nominal SS curve, and the relationship between the nominal SS curve and the TSL is derived based on the assumption that the stress for cracking should be the same in both the CZM and the real material. In particular, the true SS curve after necking is derived from the nominal SS curve by taking the average of a power-law extrapolation and a linear extrapolation, and a damage factor is introduced to offset the true stress reduction caused by the voids generated in the necking zone. The maximum traction of the TSL is equal to the maximum true stress calculated with the damage factor at the end of hardening. In addition, a simple specimen is simulated in Abaqus/Standard to calculate the critical J-integral, and the fracture energy given by the critical J-integral represents the stored strain energy in the necking zone calculated from the true SS curve. Finally, the CZM parameters obtained by the present method are compared to those used in a previous related work for a simulation of the drop-weight tear test.
Keywords: dynamic fracture, cohesive zone model, traction-separation law, stress-strain curve, J-integral
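The nominal-to-true conversion underlying the method can be sketched as below. These standard relations hold up to necking (uniform deformation, incompressibility); the post-necking averaging of power-law and linear extrapolations and the damage factor described in the abstract are not reproduced here:

```python
import math

def true_from_nominal(nom_stress, nom_strain):
    """Convert nominal (engineering) stress/strain to true values,
    assuming incompressibility and uniform deformation (pre-necking):
    sigma_true = sigma_nom * (1 + eps_nom), eps_true = ln(1 + eps_nom)."""
    true_strain = math.log(1.0 + nom_strain)
    true_stress = nom_stress * (1.0 + nom_strain)
    return true_stress, true_strain
```

At 10% nominal strain, for example, the true stress exceeds the nominal stress by 10%, which is why the maximum traction taken from the true curve is larger than the engineering ultimate stress.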
Procedia PDF Downloads 513
1497 Marginalized Two-Part Joint Models for Generalized Gamma Family of Distributions
Authors: Mohadeseh Shojaei Shahrokhabadi, Ding-Geng (Din) Chen
Abstract:
Positive continuous outcomes with a substantial number of zero values and incomplete longitudinal follow-up are quite common in medical cost data. To jointly model semi-continuous longitudinal cost data and survival data and to provide marginalized covariate effect estimates, a marginalized two-part joint model (MTJM) has been developed for outcome variables with log-normal distributions. In this paper, we propose MTJM models for outcome variables from the generalized gamma (GG) family of distributions. The GG distribution constitutes a general family that includes many of the most frequently used distributions, such as the gamma, exponential, Weibull, and log-normal. In the proposed MTJM-GG model, the conditional mean from a conventional two-part model with a three-parameter GG distribution is parameterized to provide the marginal interpretation for regression coefficients. In addition, MTJM-gamma and MTJM-Weibull are developed as special cases of MTJM-GG. To illustrate the applicability of the MTJM-GG, we applied the model to a set of real electronic health record data recently collected in Iran, and we provide SAS code for the application. The simulation results showed that when the outcome distribution is unknown or misspecified, which is usually the case in real data sets, the MTJM-GG consistently outperforms the other models. The GG family of distributions facilitates estimating a model with improved fit over the MTJM-gamma, standard Weibull, or log-normal specifications.
Keywords: marginalized two-part model, zero-inflated, right-skewed, semi-continuous, generalized gamma
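The nesting property of the GG family can be made concrete with the Stacy parameterization of the density (an illustrative textbook form, not necessarily the paper's parameterization): p = 1 recovers the gamma density and d = p recovers the Weibull, with the log-normal as a limiting case.

```python
import math

def gengamma_pdf(x, a, d, p):
    """Stacy generalized gamma density f(x; a, d, p).

    Special cases: p == 1 gives the gamma density, d == p gives the
    Weibull density; the log-normal arises in the limit d -> infinity.
    """
    return (p / a ** d) * x ** (d - 1) * math.exp(-(x / a) ** p) / math.gamma(d / p)
```

For instance, with a = 1, d = 2, p = 1 the density at x = 1 equals the gamma value x·e⁻ˣ = e⁻¹, and with a = 1, d = p = 2 it equals the Weibull value 2x·e⁻ˣ² = 2e⁻¹.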
Procedia PDF Downloads 176
1496 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: channel estimation, OFDM, pilot-assist, VLC
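The two estimators can be sketched at a single pilot subcarrier. This is a simplified illustration under the assumption of uncorrelated subcarrier gains (the full LMMSE uses the channel autocorrelation matrix; all names here are illustrative, not from the paper):

```python
def ls_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at one pilot subcarrier: H_LS = Y / X."""
    return y_pilot / x_pilot

def lmmse_shrink(h_ls, channel_var, noise_var):
    """Per-subcarrier LMMSE (Wiener) shrinkage of the LS estimate.

    Assumes uncorrelated subcarrier gains, a simplifying assumption:
    the full LMMSE multiplies the LS estimate by
    R_hh (R_hh + noise_var * I)^-1 instead of a scalar weight.
    """
    w = channel_var / (channel_var + noise_var)
    return w * h_ls
```

The shrinkage weight explains the observed behavior: at high SNR the LMMSE estimate approaches the LS one, while at low SNR it pulls the noisy LS estimate toward zero, reducing estimation error and hence BER.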
Procedia PDF Downloads 180
1495 Solar Energy Applications in Seawater Distillation
Authors: Yousef Abdulaziz Almolhem
Abstract:
Geographically, most Arab countries are located in arid or semiarid regions. For this reason, many of these countries have adopted seawater desalination as a strategy to overcome water scarcity. For example, the water supply of the UAE, Kuwait, and Saudi Arabia comes almost entirely from seawater desalination plants. Many areas in Saudi Arabia and elsewhere in the world suffer from a lack of fresh water, which hinders their development despite the availability of saline water and high solar radiation intensity. Furthermore, most developing countries do not have sufficient meteorological data to evaluate whether the solar radiation is adequate for solar desalination. A mathematical model was developed to simulate and predict the thermal behavior of a solar still that uses direct solar energy for the distillation of seawater. Measurements were taken in the Environment and Natural Resources Department, Faculty of Agricultural and Food Sciences, King Faisal University, Saudi Arabia, in order to evaluate the present model. The simulation results obtained from the model were compared with the measured data. The main results of this research showed slight differences between the measured and predicted values of the quantities studied, resulting from changes in some factors treated as constants in the model, such as sky clearness, wind velocity, and the salt concentration of the water in the basin of the solar still. It can be concluded that the present model can be used to estimate the average total solar radiation and the thermal behavior of the solar still in any area, with consideration of the geographical location.
Keywords: mathematical model, seawater, distillation, solar radiation
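As a rough illustration of the scale involved (not the paper's transient energy-balance model), the daily distillate of a basin still can be bounded by converting absorbed solar energy into latent heat of evaporation; the function name and the efficiency figure are assumptions for the sketch:

```python
def daily_distillate(solar_irradiation, area, efficiency, latent_heat=2.26e6):
    """Rough upper-bound estimate of daily distillate (kg) from a basin
    solar still: absorbed solar energy converted to evaporation.

    solar_irradiation in J/m^2/day, area in m^2, efficiency the overall
    still efficiency (typically quoted around 0.3-0.5), latent heat of
    vaporization of water in J/kg. Illustrative only: the paper's model
    resolves the full transient thermal behavior of the still.
    """
    return solar_irradiation * area * efficiency / latent_heat
```

With 20 MJ/m²/day of irradiation, 1 m² of basin, and 35% efficiency, this gives roughly 3.1 kg of distillate per day, which is the right order of magnitude for simple basin stills.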
Procedia PDF Downloads 283
1494 Technology Valuation of Unconventional Gas R&D Project Using Real Option Approach
Authors: Young Yoon, Jinsoo Kim
Abstract:
The adoption of information and communication technologies (ICT) across all industries is growing under Industry 4.0. Many oil companies are also increasingly adopting ICT to improve the efficiency of existing operations, make more accurate and quicker decisions, and reduce overall costs through optimization. ICT is playing an important role in the development of unconventional oil and gas, and companies must take advantage of it to gain a competitive edge. In this study, a real option approach is applied to an unconventional gas R&D project to evaluate its ICT component. Many unconventional gas reserves, such as shale gas and coal-bed methane (CBM), have been developed thanks to technological improvements and high energy prices. There are many uncertainties in unconventional development across its three stages (exploration, development, production). Traditional quantitative benefit-cost methods, such as net present value (NPV), are not sufficient for capturing ICT value. We attempt the ICT valuation by applying a compound option model; the model is applied to a real CBM project case, showing how it handles uncertainties. Variables are treated as uncertain, and a Monte Carlo simulation is performed to assess their effects. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880), and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
Keywords: information and communication technologies, R&D, real option, unconventional gas
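The Monte Carlo building block of such a valuation can be sketched as a single-stage option to invest, with the project value following geometric Brownian motion (all parameters and names are illustrative assumptions; the paper nests valuations of this kind into a compound option across the exploration, development, and production stages):

```python
import math
import random

def mc_option_value(v0, strike, r, sigma, t, n_paths=100_000, seed=7):
    """Monte Carlo value of a simple option to invest: pay `strike` at
    time t for a project then worth V_t, where V_t follows GBM with
    risk-free drift r and volatility sigma. A single-stage sketch of
    the compound-option valuation described in the abstract.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff = 0.0
    for _ in range(n_paths):
        vt = v0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff += max(vt - strike, 0.0)  # exercise only if profitable
    return math.exp(-r * t) * payoff / n_paths
```

Because the downside is truncated at zero, the option value exceeds the static NPV whenever uncertainty is material, which is precisely why NPV understates the value of flexible, staged R&D investments.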
Procedia PDF Downloads 229
1493 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development
Authors: Jiahui Yang, John Quigley, Lesley Walls
Abstract:
In this paper, the authors develop a stochastic model regarding the investment in supplier delivery performance development from a buyer’s perspective. The authors propose a multivariate model through a Multinomial-Dirichlet distribution within an empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed-form solution is obtained, and the lower and upper bounds for both the optimal investment level and the expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights regarding supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single-supplier investment to a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for the optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision-making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.
Keywords: decision making, empirical Bayesian, portfolio optimization, supplier development, supply chain management
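The conjugate Multinomial-Dirichlet update at the heart of such a framework can be sketched directly (an illustrative fragment, not the authors' full empirical Bayes estimator): with a Dirichlet(α) prior over the delivery-status probabilities and observed counts n, the posterior mean of status k is (α_k + n_k)/(Σα + N).

```python
def posterior_mean(prior_alpha, counts):
    """Posterior mean of delivery-status probabilities under a
    Dirichlet(alpha) prior and multinomially distributed observed
    counts (conjugate update): E[p_k | data] = (a_k + n_k)/(sum_a + N).
    """
    total = sum(prior_alpha) + sum(counts)
    return [(a + n) / total for a, n in zip(prior_alpha, counts)]
```

For example, a uniform Dirichlet(1, 1, 1) prior over three statuses (say on-time, late, rejected) combined with counts (7, 2, 1) yields posterior means (8/13, 3/13, 2/13), smoothly blending prior belief with the delivery record.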
Procedia PDF Downloads 288
1492 Modeling Fertility and Production of Hazelnut Cultivars through the Artificial Neural Network under Climate Change of Karaj
Authors: Marziyeh Khavari
Abstract:
In recent decades, climate change, global warming, and a growing world population have posed challenges such as increasing food consumption and shortages of resources. Assessing how climate change could disturb crops, especially hazelnut production, is crucial for sustainable agricultural production. For hazelnut cultivation in mid-warm conditions, such as in Iran, we present an investigation of climate parameters and how strongly they affect the fertility and nut production of hazelnut trees. The climate of the northern zones of Iran was therefore analyzed (1960-2017), revealing an upward trend in temperature. Furthermore, a descriptive analysis of six cultivars over seven years shows how this small-scale survey can demonstrate the effects of climate change on hazelnut production and its stability. Results showed that some climate parameters, such as solar radiation, soil temperature, relative humidity, and precipitation, have a more significant effect on nut production. Moreover, some cultivars produced more stable yields, for instance Negret and Segorbe, while Mervill de Boliver recorded the most variation during the study. A further aim was to train a model to simulate and predict nut production through a neural network and a linear regression simulation. The study developed the ANN model and estimated its generalization capability with criteria such as RMSE, SSE, and accuracy factors for the dependent and independent variables (environmental and yield traits). The models were trained and tested, and their accuracy proved adequate for predicting hazelnut production under fluctuating weather parameters.
Keywords: climate change, neural network, hazelnut, global warming
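Of the accuracy criteria named above, RMSE is the most common; a minimal helper makes the definition explicit (illustrative, not code from the study):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted yields,
    one of the criteria used to judge a trained network's
    generalization: sqrt(mean((y - y_hat)^2)).
    """
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
```

Because the errors are squared before averaging, RMSE penalizes the occasional large miss more heavily than mean absolute error does, which suits yield prediction where a badly wrong year matters most.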
Procedia PDF Downloads 132
1491 Physical Characterization of a Watershed for Correlation with Parameters of Thomas Hydrological Model and Its Application in Iber Hidrodinamic Model
Authors: Carlos Caro, Ernest Blade, Nestor Rojas
Abstract:
This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration application usable in distributed models such as the Iber model, a distributed simulation model for unsteady free-surface flow. Twenty-five points across the 15 sub-basins of the Río Piedras basin (Boyacá) were explored and soil samples obtained, which were geotechnically characterized through laboratory tests. The Thomas model physically characterizes the input area with only four parameters (a, b, c, d). Establishing a measurable relationship between geotechnical parameters and the four hydrological parameters helps determine subsurface, groundwater, and surface flow in a more agile manner. The intention is thereby to bound the initial model parameters on the basis of the geotechnical characterization. In hydrogeological models of rural watersheds, calibration is an important step in the characterization of the study area. This step can require significant computational cost and time, especially if the initial parameter values before calibration are far from the geotechnical reality. A better approach to these initial values means optimizing the process through the area's geotechnical materials, which yields an important approximation for the study: the starting range of variation for the calibration parameters.
Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model
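For reference, the four parameters (a, b, c, d) enter the Thomas model through its standard monthly water-balance step; the sketch below is the assumed textbook "abcd" formulation (a: runoff propensity, b: soil-moisture capacity, c: recharge fraction, d: groundwater recession), not code from the study:

```python
import math

def thomas_abcd_step(precip, pet, s_prev, g_prev, a, b, c, d):
    """One monthly step of the Thomas 'abcd' water-balance model.

    Returns (streamflow, actual ET, soil moisture, groundwater store).
    """
    w = precip + s_prev                        # available water
    t = (w + b) / (2.0 * a)
    y = t - math.sqrt(t * t - w * b / a)       # evapotranspiration opportunity
    s = y * math.exp(-pet / b)                 # end-of-month soil moisture
    et = y - s                                 # actual evapotranspiration
    g = (g_prev + c * (w - y)) / (1.0 + d)     # groundwater store after recharge
    streamflow = (1.0 - c) * (w - y) + d * g   # direct runoff + baseflow
    return streamflow, et, s, g
```

A useful sanity check on any implementation is closure of the water balance: precipitation plus initial stores must equal streamflow plus evapotranspiration plus final stores.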
Procedia PDF Downloads 522
1490 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching
Authors: Konstantina Georgali
Abstract:
The present study explores Greek teachers’ understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers, and provided current research evidence to show that structural and morphological particularities of native languages (in this case, the Greek language) can be used by teachers to enhance children’s understanding of language and simultaneously improve oral language skills for children with typical language development and for those with language difficulties. The research was based on a Sequential Exploratory Mixed Methods Design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers. Its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 language-impaired children and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers’ awareness of educational linguistics and of language difficulties.
They also provided a different perspective on children’s language needs and on language teaching approaches, one that reflected current advances and conceptualizations of language problems and opened a new window on how best these needs can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills, and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children’s language development and, in tandem, improve academic attainment. Important educational implications arose, along with clear indications that the findings generalize beyond the Greek educational context.
Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development
Procedia PDF Downloads 382
1489 Anonymous Gel-Fluid Transition of Solid Supported Lipids
Authors: Asma Poursoroush
Abstract:
Solid-supported lipid bilayers are often used as a simple model for studies of biological membranes. The presence of a solid substrate that interacts attractively with lipid head-groups is expected to affect the phase behavior of the supported bilayer. Molecular dynamics simulations of a coarse-grained model are thus performed to investigate the phase behavior of supported one-component lipid bilayer membranes. Our results show that the attraction of the lipid head groups to the substrate leads to a phase behavior different from that of a free-standing lipid bilayer. In particular, we found that the phase behaviors of the two leaflets are decoupled in the presence of a substrate. The proximal leaflet undergoes a clear gel-to-fluid phase transition at a temperature lower than that of a free-standing bilayer, and this temperature decreases with increasing strength of the substrate-lipid attraction. The distal leaflet, however, undergoes a change from a homogeneous liquid phase at high temperatures to a heterogeneous state consisting of small liquid and gel domains, the average size of the gel domains increasing with decreasing temperature. While the chain order parameter of the proximal leaflet clearly shows a gel-fluid phase transition, the chain order parameter of the distal leaflet does not exhibit a clear phase transition. The decoupling in the phase behavior of the two leaflets is due to a non-symmetric lipid distribution between the two leaflets resulting from the presence of the substrate.
Keywords: membrane, substrate, molecular dynamics, simulation
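The chain order parameter referred to above is conventionally the second-rank orientational order parameter S = ⟨(3cos²θ − 1)/2⟩, with θ the angle between a chain bond and the bilayer normal; a minimal sketch (illustrative, not the simulation code):

```python
def chain_order_parameter(cos_angles):
    """Second-rank order parameter S = <(3 cos^2(theta) - 1)/2> over a
    sample of bond orientations; S near 1 indicates an ordered (gel)
    leaflet, S near 0 a disordered (fluid) one.
    """
    n = len(cos_angles)
    return sum((3.0 * c * c - 1.0) / 2.0 for c in cos_angles) / n
```

Bonds perfectly aligned with the normal (cosθ = 1) give S = 1, while an isotropic distribution (⟨cos²θ⟩ = 1/3) gives S = 0, which is why S resolves the proximal leaflet's transition but stays ambiguous for the heterogeneous distal leaflet.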
Procedia PDF Downloads 195
1488 Generating Spherical Surface of Wear Drain in Cutting Metal by Finite Element Method Analysis
Authors: D. Kabeya Nahum, L. Y. Kabeya Mukeba
Abstract:
In this work, we consider the design of surface defects in the support of the anchor rod ball joint. The future adhesion contact arising in manufacturing machining is treated by a short numerical analysis of the coupled thermo-mechanical problem in process engineering. The geometrical evaluation and the quasi-static and dynamic states are discussed in terms of kinematic dimensional tolerances on the surfaces of the part. Geometric modeling using the finite element method (FEM) on the rough part at this phase provides an opportunity to solve the nonlinear behavior observed in empirical data and to improve the discrete functional surfaces. The open question here is how to obtain the spherical geometry of the wear drain through a rolling operation. The formulation, with a (1 ± 0.01) mm thickness near the wear drain for the semi-finishing tool and the study of different angles, does not help the professional factor in metal-cutting design with respect to vibration, friction, and the solid-solid interface of part and tool during this complex physical process, whose multiple parameters are not defined in Sobolev spaces. The stochastic approach to cracking, wear, and fretting due to the cutting forces facing boundary layers of small thickness in the workpiece and the tool in the machining position is predicted in the neighborhood of the ‘Yakam Matrix’.
Keywords: FEM, geometry, part, simulation, spherical surface engineering, tool, workpiece
Procedia PDF Downloads 273
1487 Limbic Involvement in Visual Processing
Authors: Deborah Zelinsky
Abstract:
The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers are currently looking at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGC) through the "retinohypothalamic tract" (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities.
These signals arise from the ipRGC cells, which were discovered only some 20 years ago, and current prescribing does not account for the campana retinal interneurons, discovered only two years ago. As eye care providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.
Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing
Procedia PDF Downloads 85
1486 Estimation of Bio-Kinetic Coefficients for Treatment of Brewery Wastewater
Authors: Abimbola M. Enitan, J. Adeyemo
Abstract:
Anaerobic modeling is a useful tool to describe and simulate the condition and behaviour of anaerobic treatment units for better effluent quality and biogas generation. The present investigation deals with the anaerobic treatment of brewery wastewater with varying organic loads. The chemical oxygen demand (COD) and total suspended solids (TSS) of the influent and effluent of the bioreactor were determined at various retention times to generate data for the kinetic coefficients. The bio-kinetic coefficients in the modified Stover–Kincannon kinetic and methane generation models were determined to study the performance of the anaerobic digestion process. At steady state, the kinetic coefficient (K), the endogenous decay coefficient (Kd), the maximum growth rate of microorganisms (µmax), the growth yield coefficient (Y), the ultimate methane yield (Bo), the maximum utilization rate constant (Umax), and the saturation constant (KB) of the models were calculated to be 0.046 g/g COD, 0.083 d⁻¹, 0.117 d⁻¹, 0.357 g/g, 0.516 L CH4/g COD added, 18.51 g/L/day, and 13.64 g/L/day, respectively. The outcome of this study will help in the simulation of anaerobic models to predict usable methane and good effluent quality during the treatment of industrial wastewater. This will protect the environment, conserve natural resources, save time, and reduce the costs incurred by industries in discharging untreated or partially treated wastewater. It will also contribute to a sustainable long-term clean development mechanism for optimizing the methane produced from the anaerobic degradation of waste in a closed system.
Keywords: brewery wastewater, methane generation model, environment, anaerobic modeling
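Umax and KB are conventionally estimated from the double-reciprocal linearization of the modified Stover–Kincannon model, 1/removal = (KB/Umax)(1/loading) + 1/Umax, where the loading rate is Q·S0/V and the removal rate Q·(S0 − S)/V. A hedged sketch of that fit (function and variable names are assumptions, not from the paper):

```python
def stover_kincannon_fit(loading_rates, removal_rates):
    """Estimate Umax and KB of the modified Stover-Kincannon model via
    ordinary least squares on the double-reciprocal linearization:
    1/removal = (KB/Umax) * (1/loading) + 1/Umax.
    """
    xs = [1.0 / l for l in loading_rates]
    ys = [1.0 / r for r in removal_rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    umax = 1.0 / intercept     # maximum utilization rate constant
    kb = slope * umax          # saturation constant
    return umax, kb
```

On synthetic data generated with the paper's reported values (Umax = 18.51 g/L/day, KB = 13.64 g/L/day), the fit recovers both constants exactly, since the linearized data lie on a straight line.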
Procedia PDF Downloads 270
1485 A Constrained Model Predictive Control Scheme for Simultaneous Control of Temperature and Hygrometry in Greenhouses
Authors: Ayoub Moufid, Najib Bennis, Soumia El Hani
Abstract:
The objective of greenhouse climate control is to improve crop development and to minimize production costs. A greenhouse is a system open to the external environment, and the challenge is to regulate the internal climate despite strong meteorological disturbances. The internal state of the greenhouse considered in this work is defined by two relevant and coupled variables, namely inside temperature and hygrometry. These two variables are chosen to describe the internal state of greenhouses because of their importance in the development of plants and their sensitivity to external climatic conditions, the sources of weather disturbances. A multivariable model is proposed and validated by considering the greenhouse as a black-box system, and the least-squares method is applied to parameter identification based on collected experimental measurements. To regulate the internal climate, we propose a model predictive control (MPC) scheme. It accounts for the measured meteorological disturbances and for the physical and operational constraints on the control and state variables. A successful feasibility study of the proposed controller is presented, and simulation results show good performance despite the strong interaction between internal and external variables and the strong external meteorological disturbances. The inside temperature and hygrometry track the desired trajectories closely. A comparison study with an on/off controller applied to the same greenhouse confirms the efficiency of the MPC approach to inside climate control.
Keywords: climate control, constraints, identification, greenhouse, model predictive control, optimization
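The constrained-MPC idea can be illustrated by a one-step scalar reduction: predict the next state from a linear model x⁺ = a·x + b·u + d (with d the measured disturbance), choose the control that drives the prediction to the setpoint, and clip it to the actuator constraints. This is purely illustrative; the paper's controller is multivariable with a longer horizon:

```python
def mpc_one_step(x, ref, disturbance, a, b, u_min, u_max):
    """One-step-horizon MPC for a scalar climate model
    x_next = a*x + b*u + disturbance: pick the control that makes the
    predicted state hit the setpoint, then saturate at the actuator
    limits (the constraint-handling step that On/Off control lacks).
    """
    u_unconstrained = (ref - a * x - disturbance) / b
    return min(max(u_unconstrained, u_min), u_max)
```

Unlike on/off control, the optimal control here varies continuously with the measured disturbance, which is what lets MPC anticipate weather changes instead of merely reacting to threshold crossings.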
Procedia PDF Downloads 206
1484 Application Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams
Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche
Abstract:
Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure, and the logical transition from the stable state to the failed state is blurred, so the stability failure process is treated as a probable event. Controlling the risk of failure is of capital importance; it rests on a cross-analysis of the gravity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering practice; in our case, level II methods are applied via limit-state analysis. Hence, the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which provides a full analysis of the problem by integrating the probability density functions of the random variables over the safety domain by means of Monte Carlo simulations.
Taking into account the change in stress under the normal, exceptional, and extreme load combinations acting on the dam, the calculation results provide acceptable failure probability values that largely corroborate the theory; in fact, the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure, with intolerable failure probability values, especially in the case of increased uplift under a hypothetical failure of the drainage system.
Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor
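For a linear limit state g = R − S with independent normal resistance R and load effect S, the level II (FORM) result reduces to β = (μ_R − μ_S)/√(σ_R² + σ_S²) and Pf = Φ(−β), and a level III Monte Carlo run can check it. This is an illustrative reduction; the dam's sliding limit state is nonlinear and FORM/SORM linearize it at the design point:

```python
import math
import random

def form_pf(mu_r, sigma_r, mu_s, sigma_s):
    """Level II: reliability index and first-order failure probability
    for the linear limit state g = R - S with independent normals:
    beta = (mu_R - mu_S)/sqrt(sig_R^2 + sig_S^2), Pf = Phi(-beta).
    """
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # standard normal CDF at -beta
    return beta, pf

def mc_pf(mu_r, sigma_r, mu_s, sigma_s, n=50_000, seed=1):
    """Level III: crude Monte Carlo estimate of the same failure
    probability, counting realizations with g = R - S < 0.
    """
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sigma_r) - rng.gauss(mu_s, sigma_s) < 0.0
                for _ in range(n))
    return fails / n
```

For this linear normal case the two levels agree up to sampling noise; the practical difference appears on nonlinear limit states, where FORM is only a first-order approximation and Monte Carlo remains exact but costly.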
Procedia PDF Downloads 318