Search results for: compound Poisson lognormal distribution
5001 Food Foam Characterization: Rheology, Texture and Microstructure Studies
Authors: Rutuja Upadhyay, Anurag Mehra
Abstract:
Solid food foams/cellular foods are colloidal systems which impart structure, texture and mouthfeel to many food products such as bread, cakes, ice-cream, meringues, etc. Their heterogeneous morphology makes the quantification of structure/mechanical relationships complex. The porous structure of solid food foams is highly influenced by the processing conditions, the ingredient composition, and their interactions. Sensory perceptions of food foams depend on bubble size, shape, orientation, quantity and distribution, which together determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn affects the force-deformation curves. The obvious step in obtaining the relationship between the mechanical properties and the porous structure is to quantify them simultaneously. Here, we study food foams such as bread dough, baked bread and steamed rice cakes to determine the link between ingredients and the corresponding effect of each of them on the rheology, microstructure, bubble size and texture of the final product. Dynamic rheometry (SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis and texture profile analysis (TPA) have been used to characterize the foods studied. In all of the above systems, there was a common observation: when the mean bubble diameter is smaller, the product becomes harder, as evidenced by the increase in the storage and loss moduli (G′, G″), whereas when the mean bubble diameter is large, the product is softer, with a decrease in the moduli values (G′, G″). The bubble size distribution also affects the texture of foods. It was found that bread doughs with hydrocolloids (xanthan gum, alginate) develop a more uniform bubble size distribution. Bread baking experiments were done to study the rheological changes and mechanisms involved in the structural transition of dough to crumb. Steamed rice cakes with xanthan gum (XG) added at 0.1% concentration showed lower hardness, with a narrower pore size distribution and a larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture. Keywords: food foams, rheology, microstructure, texture
Procedia PDF Downloads 334
5000 A Fully-Automated Disturbance Analysis Vision for the Smart Grid Based on Smart Switch Data
Authors: Bernardo Cedano, Ahmed H. Eltom, Bob Hay, Jim Glass, Raga Ahmed
Abstract:
The deployment of smart grid devices such as smart meters and smart switches (SS) supported by a reliable and fast communications system makes automated distribution possible, and thus, provides great benefits to electric power consumers and providers alike. However, more research is needed before the full utility of smart switch data is realized. This paper presents new automated switching techniques using SS within the electric power grid. A concise background of the SS is provided, and operational examples are shown. Organization and presentation of data obtained from SS are shown in the context of the future goal of total automation of the distribution network. The description of application techniques, the examples of success with SS, and the vision outlined in this paper serve to motivate future research pertinent to disturbance analysis automation.Keywords: disturbance automation, electric power grid, smart grid, smart switches
Procedia PDF Downloads 309
4999 Kou Jump Diffusion Model: An Application to the SP 500; Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and the Russell 2000, traded during 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution combines three interesting properties: the leptokurtic feature, the memoryless property and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to overreact and underreact to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to use nonlinear curve-fitting through the trust-region-reflective algorithm on cross-sections of option prices to estimate the structural parameters of the Kou jump-diffusion model. Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options
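The calibration step described in this abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of fitting model parameters to a cross-section of option prices with SciPy's trust-region-reflective least-squares solver; the `model_price` function is only a placeholder standing in for the Kou pricing formula (not implemented here), and the parameter names, bounds and price data are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def model_price(params, strikes, spot, maturity, rate):
    """Placeholder pricing function: the real Kou jump-diffusion formula would
    go here. The Kou parameters are sigma, jump intensity lam, probability of
    an upward jump p, and the up/down exponential jump means eta1, eta2."""
    sigma, lam, p, eta1, eta2 = params
    # Toy Black-Scholes-like surrogate (NOT the Kou formula) so the sketch runs.
    d1 = (np.log(spot / strikes) + (rate + 0.5 * sigma**2) * maturity) / (sigma * np.sqrt(maturity))
    d2 = d1 - sigma * np.sqrt(maturity)
    return spot * norm.cdf(d1) - strikes * np.exp(-rate * maturity) * norm.cdf(d2)

def residuals(params, strikes, market_prices, spot, maturity, rate):
    return model_price(params, strikes, spot, maturity, rate) - market_prices

# Hypothetical cross-section of call prices
spot, maturity, rate = 100.0, 0.5, 0.03
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market_prices = np.array([21.5, 13.2, 6.8, 2.9, 1.1])

x0 = np.array([0.2, 1.0, 0.4, 10.0, 5.0])      # sigma, lambda, p, eta1, eta2
lower = [0.01, 0.0, 0.0, 1.0, 1.0]
upper = [2.0, 20.0, 1.0, 100.0, 100.0]

fit = least_squares(residuals, x0, bounds=(lower, upper), method='trf',
                    args=(strikes, market_prices, spot, maturity, rate))
print("fitted parameters:", fit.x)
```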
Procedia PDF Downloads 429
4998 Cellular Mechanisms Involved in the Radiosensitization of Breast- and Lung Cancer Cells by Agents Targeting Microtubule Dynamics
Authors: Elsie M. Nolte, Annie M. Joubert, Roy Lakier, Maryke Etsebeth, Jolene M. Helena, Marcel Verwey, Laurence Lafanechere, Anne E. Theron
Abstract:
Treatment regimens for breast- and lung cancers may include both radiation- and chemotherapy. Ideally, a pharmaceutical agent which selectively sensitizes cancer cells to gamma (γ)-radiation would allow administration of lower doses of each modality, yielding synergistic anti-cancer benefits and lower metastasis occurrence, in addition to decreasing the side-effect profiles. A range of 2-methoxyestradiol (2-ME) analogues, namely 2-ethyl-3-O-sulphamoyl-estra-1,3,5 (10) 15-tetraene-3-ol-17one (ESE-15-one), 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraen-17-ol (ESE-15-ol) and 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10)16-tetraene (ESE-16) were in silico-designed by our laboratory, with the aim of improving the parent compound’s bioavailability in vivo. The main effect of these compounds is the disruption of microtubule dynamics with a resultant mitotic accumulation and induction of programmed cell death in various cancer cell lines. This in vitro study aimed to determine the cellular responses involved in the radiation sensitization effects of these analogues at low doses in breast- and lung cancer cell lines. The oestrogen receptor positive MCF-7-, oestrogen receptor negative MDA-MB-231- and triple negative BT-20 breast cancer cell lines as well as the A549 lung cancer cell line were used. The minimal compound- and radiation doses able to induce apoptosis were determined using annexin-V and cell cycle progression markers. These doses (cell line dependent) were used to pre-sensitize the cancer cells 24 hours prior to 6 gray (Gy) radiation. Experiments were conducted on samples exposed to the individual- as well as the combination treatment conditions in order to determine whether the combination treatment yielded an additive cell death response. Morphological studies included light-, fluorescence- and transmission electron microscopy. Apoptosis induction was determined by flow cytometry employing annexin V, cell cycle analysis, B-cell lymphoma 2 (Bcl-2) signalling, as well as reactive oxygen species (ROS) production. Clonogenic studies were performed by allowing colony formation for 10 days post radiation. Deoxyribonucleic acid (DNA) damage was quantified via γ-H2AX foci and micronuclei quantification. Amplification of the p53 signalling pathway was determined by western blot. Results indicated that exposing breast- and lung cancer cells to nanomolar concentrations of these analogues 24 hours prior to γ-radiation induced more cell death than the compound- and radiation treatments alone. Hypercondensed chromatin, decreased cell density, a damaged cytoskeleton and an increase in apoptotic body formation were observed in cells exposed to the combination treatment condition. An increased number of cells present in the sub-G1 phase as well as increased annexin-V staining, elevation of ROS formation and decreased Bcl-2 signalling confirmed the additive effect of the combination treatment. In addition, colony formation decreased significantly. p53 signalling pathways were significantly amplified in cells exposed to the analogues 24 hours prior to radiation, as was the amount of DNA damage. In conclusion, our results indicated that pre-treatment of breast- and lung cancer cells with low doses of 2-ME analogues sensitized breast- and lung cancer cells to γ-radiation and induced apoptosis more so than the individual treatments alone. 
Future studies will focus on the effect of the combination treatment on non-malignant cellular counterparts. Keywords: cancer, microtubule dynamics, radiation therapy, radiosensitization
Procedia PDF Downloads 208
4997 Tunisian Dung Beetles Fauna: Composition and Biogeographic Affinities
Authors: Imen Labidi, Said Nouira
Abstract:
Dung beetle Scarabaeides of Tunisia constitute a major component of the soil fauna, especially in the Mediterranean region. In the first phase of the present study, an intensive investigation of this group, based on the gathering of all bibliographic and museological data and on a recent collection of 17,020 specimens from 106 localities in Tunisia, allowed us to confirm with certainty the presence of 94 species distributed in 43 genera, 4 families and 3 sub-families. Only 81 species, distributed in 38 genera, 4 families and 3 sub-families, were found during our own surveys. The dung beetle Scarabaeides fauna is composed of 58% Aphodiidae, 39.51% Scarabaeidae, and 8.64% Geotrupidae. The biogeographic affinities of the species were determined and showed that 42% of the identified species have a wide Palaearctic distribution. Endemism is very low: only 3 species are endemic to Tunisia, Mecynodes demoflysi, Neobodilus marani, and Thorectes demoflysi. In addition, 29 species have a wide distribution, 35 are northern species and 17 are southern species. Moreover, some species depend on very specific biotopes, such as Sisyphus schaefferi, linked to the northwest of Tunisia, and Scarabaeus semipunctatus, restricted to the northern coastal area of Tunisia. Keywords: dung beetles, Tunisia, composition, biogeography
Procedia PDF Downloads 249
4996 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning
Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis
Abstract:
Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. These findings support the theory of the 'movement economy' and demonstrate the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities. However, the effects vary between different urban contexts. In this paper, the relationship between the distribution of economic activities and the configurational characteristics of the urban network was examined at the segment level. The study area includes three neighbourhood types: urban, suburban, and rural. Among all neighbourhoods, three kinds of urban network form were recognised: 'tree-like', grid, and organic patterns. To investigate the nested effects of urban configuration, measured by the space syntax approach, and of urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was also included in the model as an independent variable. The random-effect ZINB model shows superiority over the single-level ZINB model and the multilevel linear (ML) model in explaining how the pattern of economic activities is shaped over the urban environment. After adjusting for neighbourhood type and network form effects, connectivity and syntactic centrality significantly affect the clustering of economic activities. The comparison between accumulated and newly established economic activities illustrates their different preferences in location choice. Keywords: space syntax, economic activities, multilevel model, Chinese city
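As a rough illustration of the modelling step, the sketch below fits a single-level zero-inflated negative binomial regression with statsmodels on synthetic data; the multilevel (random-effect) structure and the actual space syntax variables used in the study are not reproduced here, and all variable names and data are placeholders.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500

# Hypothetical segment-level predictors: connectivity, centrality, spatial lag
X = rng.normal(size=(n, 3))
exog = sm.add_constant(X)

# Synthetic counts of economic activities per segment, with excess zeros
linpred = 0.5 + 0.8 * X[:, 0] + 0.4 * X[:, 1]
counts = rng.poisson(np.exp(linpred))
counts[rng.random(n) < 0.3] = 0           # inflate the zero class

# ZINB with a logit zero-inflation part driven by the first predictor
model = ZeroInflatedNegativeBinomialP(counts, exog,
                                      exog_infl=sm.add_constant(X[:, [0]]), p=2)
result = model.fit(method='bfgs', maxiter=500, disp=False)
print(result.summary())
```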
Procedia PDF Downloads 124
4995 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data
Authors: Mohamed Amhal, Jose Sayritupac
Abstract:
Photovoltaic (PV) systems have risen as one of the modern renewable energy sources used widely to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate and maintain PV systems. This work analyzes the current distribution of losses in PV systems, starting from the available solar energy and going through the DC side and the AC side to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and the datasheets of the PV system components. An analysis of the order of magnitude of each loss is compared to the current literature and to commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not covered enough in the literature, except in some software where the concept is very common. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on several PV plants in France which have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of the other losses is comparable to the literature; third, all reported losses are correlated with operational and environmental conditions. For future work, an extended analysis of further PV plants in France and abroad will be performed. Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems
Procedia PDF Downloads 176
4994 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients
Authors: Boguslaw Schreyer
Abstract:
During the wheeled robot’s braking process, the extra dynamic vertical forces act on all wheels: left, right, front or rear. Those forces are directed downward on the front wheels while directed upward on the rear wheels. In order to maximize the deceleration, therefore, minimize the braking time and braking distance, we need to calculate a correct torque distribution: the front braking torque should be increased, and rear torque should be decreased. At the same time, we need to provide better transversal stability. In a simple case of all adhesion coefficients being the same under all wheels, the torque distribution may secure the optimal (maximal) control of the robot braking process, securing the minimum braking distance and a minimum braking time. At the same time, the transversal stability is relatively good. At any time, we control the transversal acceleration. In the case of the transversal movement, we stop the braking process and re-apply braking torque after a defined period of time. If we correctly calculate the value of the torques, we may secure the traction coefficient under the front and rear wheels close to its maximum. Also, in order to provide an optimum braking control, we need to calculate the timing of the braking torque application and the timing of its release. The braking torques should be released shortly after the wheels passed a maximum traction coefficient (while a wheels’ slip increases) and applied again after the wheels pass a maximum of traction coefficient (while the slip decreases). The correct braking torque distribution secures the front and rear wheels, passing this maximum at the same time. It guarantees an optimum deceleration control, therefore, minimum braking time. In order to calculate a correct torque distribution, a control unit should receive the input signals of a rear torque value (which changes independently), the robot’s deceleration, and values of the vertical front and rear forces. In order to calculate the timing of torque application and torque release, more signals are needed: speed of the robot: angular speed, and angular deceleration of the wheels. In case of different adhesion coefficients under the left and right wheels, but the same under each pair of wheels- the same under right wheels and the same under left wheels, the Select-Low (SL) and select high (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. Often in the case of the robot, more important is braking efficiency; therefore, the SH method is applied with some control of the transversal stability. In the case that all adhesion coefficients are different under all wheels, the front-rear torque distribution is maintained as in all previous cases. However, the timing of the braking torque application and release is controlled by the rear wheels’ lowest adhesion coefficient. The Lagrange equations have been used to describe robot dynamics. Matlab has been used in order to simulate the process of wheeled robot braking, and in conclusion, the braking methods have been selected.Keywords: wheeled robots, braking, traction coefficient, asymmetric
Procedia PDF Downloads 165
4993 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the normal distribution). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables. Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
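A minimal sketch of the comparison idea, under stated assumptions: two DEM-derived samples (standing in for values drawn from DEM02 and DEM05 at common points) are compared variable by variable with the two-sample Kolmogorov-Smirnov test, and then, as a rough stand-in for the joint test on the convolution described above, with a single test on the sum of the standardized variables, whose distribution is the convolution of the individual ones when they are treated as independent. The data below are synthetic placeholders, not values from the Spanish DEM products.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Placeholder samples for the reference (DEM02) and the product (DEM05)
ref = {"elevation": rng.normal(500, 80, 5000),
       "slope": rng.gamma(2.0, 4.0, 5000),
       "aspect": rng.uniform(0, 360, 5000)}
prod = {"elevation": rng.normal(502, 82, 5000),
        "slope": rng.gamma(2.1, 4.0, 5000),
        "aspect": rng.uniform(0, 360, 5000)}

# Per-variable two-sample Kolmogorov-Smirnov tests
for var in ref:
    stat, pval = ks_2samp(ref[var], prod[var])
    print(f"{var:9s}  D = {stat:.4f}  p = {pval:.3f}")

# Single joint test on the sum of standardized variables (the distribution of a
# sum of independent variables is the convolution of their distributions).
def standardized_sum(d):
    return sum((d[v] - d[v].mean()) / d[v].std() for v in d)

stat, pval = ks_2samp(standardized_sum(ref), standardized_sum(prod))
print(f"joint      D = {stat:.4f}  p = {pval:.3f}")
```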
Procedia PDF Downloads 115
4992 Research on Spatial Allocation Optimization of Urban Elderly Care Facilities Based on ArcGIS Technology
Authors: Qiao Qiao
Abstract:
As society develops, the demand of the elderly for care service facilities is increasing. Taking the 26 street towns of Jiangjin District, Chongqing as examples, ArcGIS spatial analysis methods were used to analyze the distribution of the elderly population, the kernel density of the elderly population, and the spatial layout characteristics of institutional elderly care facilities in the district. The results show that the structure and degree of aging of the elderly population differ between street towns, that there is a certain imbalance between the spatial distribution of the elderly population and the planning and construction of elderly care facilities, and that the accessibility of elderly care facilities is uneven. Therefore, a genetic algorithm is used to optimize the spatial layout of institutional elderly care facilities, improve their accessibility, strengthen the participation of multiple stakeholders, and provide a reference for future construction planning of elderly care facilities. Keywords: institutional pension facilities, spatial layout, accessibility, ArcGIS
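The genetic-algorithm step can be sketched as a toy p-median siting problem: choosing facility sites that minimize the total demand-weighted distance to the elderly population. This is a generic illustration only; the real analysis would use the ArcGIS-derived population densities and travel distances rather than the random coordinates and weights assumed here.

```python
import random

random.seed(1)

# Hypothetical demand points: (x, y, elderly_population_weight) and candidate sites
demand = [(random.uniform(0, 100), random.uniform(0, 100), random.randint(10, 500))
          for _ in range(60)]
candidates = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
P = 5                # number of facilities to site
POP, GENS = 40, 200  # GA population size and number of generations

def cost(sites):
    """Total demand-weighted distance to the nearest chosen facility."""
    total = 0.0
    for x, y, w in demand:
        total += w * min(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                         for cx, cy in (candidates[i] for i in sites))
    return total

def random_solution():
    return tuple(sorted(random.sample(range(len(candidates)), P)))

def crossover(a, b):
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, P)))

def mutate(sol):
    sol = list(sol)
    sol[random.randrange(P)] = random.randrange(len(candidates))
    return tuple(sorted(set(sol))) if len(set(sol)) == P else random_solution()

population = [random_solution() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=cost)
    elite = population[:POP // 4]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    population = elite + children

best = min(population, key=cost)
print("best facility indices:", best, "cost:", round(cost(best), 1))
```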
Procedia PDF Downloads 10
4991 First-Principles Investigation of the Structural and Electronic Properties of Mg1-xBixO
Authors: G. P. Abdel Rahim, M. María Guadalupe Moreno Armenta, Jairo Arbey Rodriguez
Abstract:
We investigated the structure and electronic properties of the compound Mg1-xBixO with bismuth concentrations x = 0, ¼, ½, and ¾ in the NaCl (rock-salt) and WZ (wurtzite) phases. The calculations were performed using the first-principles pseudo-potential method within the framework of spin density functional theory (DFT). Our calculations predict that for Bi concentrations greater than ~70%, the WZ structure is more favorable than the NaCl one, and that for x = 0 (pure MgO), x = 0.25 and x = 0.50 of Bi concentration the NaCl structure is more favorable than the WZ one. For x = 0.75 of Bi, a transition from wurtzite towards NaCl is possible when the pressure is about 22 GPa. It has also been observed that the crystal lattice constant closely follows Vegard's law, and that the bulk modulus and the cohesion energy decrease with the Bi concentration x. Keywords: DFT, Mg1-xBixO, pseudo-potential, rock-salt, wurtzite
Procedia PDF Downloads 525
4990 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using the London’s Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and paradoxically benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time. Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 217
4989 Influence of Maximum Fatigue Load on Probabilistic Aspect of Fatigue Crack Propagation Life at Specified Grown Crack in Magnesium Alloys
Authors: Seon Soon Choi
Abstract:
The principal purpose of this paper is to determine the influence of the maximum fatigue load on the probabilistic aspect of the fatigue crack propagation life at a specified grown crack in magnesium alloys. Fatigue crack propagation experiments were carried out in laboratory air under different maximum fatigue loads to obtain fatigue crack propagation data for statistical analysis. In order to analyze the probabilistic aspect of the fatigue crack propagation life, a goodness-of-fit test for the probability distribution of the fatigue crack propagation life at a specified grown crack is implemented through the Anderson-Darling test. The probability distribution that describes the fatigue crack propagation life well is also verified under the different maximum fatigue load conditions. Keywords: fatigue crack propagation life, magnesium alloys, maximum fatigue load, probability
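As an illustration of the goodness-of-fit step, the sketch below applies SciPy's Anderson-Darling test to the logarithm of simulated crack-propagation lives, i.e. it checks a lognormal hypothesis for each load level. The load labels and data are synthetic placeholders, not the measured magnesium-alloy lives, and the lognormal choice is an assumption made only for the example.

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(7)

# Placeholder fatigue crack propagation lives (cycles) at a specified crack
# length, generated here as lognormal samples for three maximum-load levels.
lives_by_load = {
    "load level 1": rng.lognormal(mean=11.0, sigma=0.15, size=30),
    "load level 2": rng.lognormal(mean=10.6, sigma=0.20, size=30),
    "load level 3": rng.lognormal(mean=10.2, sigma=0.25, size=30),
}

for label, lives in lives_by_load.items():
    # Testing log(life) against a normal distribution is equivalent to testing
    # the life itself against a lognormal distribution.
    result = anderson(np.log(lives), dist='norm')
    print(label, " A2 =", round(result.statistic, 3),
          " critical values:", result.critical_values)
```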
Procedia PDF Downloads 389
4988 Distribution of Dynamical and Energy Parameters in Axisymmetric Air Plasma Jet
Authors: Vitas Valinčius, Rolandas Uscila, Viktorija Grigaitienė, Žydrūnas Kavaliauskas, Romualdas Kėželis
Abstract:
Determination of the integral dynamical and energy characteristics of high-temperature gas flows is a very important task of gas dynamics for hazardous substance destruction systems. These characteristics are also always necessary for the investigation of high-temperature turbulent flow dynamics and of heat and mass transfer. It is well known that the distribution of the dynamical and thermal characteristics of high-temperature flows and jets is strongly related to the variation of heat flux over the heated area. As numerous experiments and theoretical considerations show, the fundamental properties of an isothermal jet are well investigated. However, establishing such regularities under high-temperature conditions reveals certain specific behavior compared with moderate-temperature jets and flows. Their structures have not been thoroughly studied yet, especially in the case of a plasma environment. It is well known that the distributions of local jet parameters in high-temperature and isothermal jets and flows may differ significantly. A high-temperature axisymmetric air jet generated by an atmospheric-pressure DC arc plasma torch was investigated employing an enthalpy probe of 3.8∙10⁻³ m diameter. Distributions of velocity and temperature were established in different cross-sections of the plasma jet outflowing from a pipe of 42∙10⁻³ m diameter at an average mean velocity of 700 m∙s⁻¹ and an averaged temperature of 4000 K. It has been found that gas heating only slightly influences the shape and values of the dimensionless velocity and temperature profiles in the main zone of the plasma jet, but has a significant influence in the initial zone. The width of the initial zone of the plasma jet was found to be smaller than in the case of isothermal flow. The relation between the dynamical thickness and the turbulent Prandtl number has been established along the jet axis. The experimental results were generalized in dimensionless form. The presence of convective heating shows that heat transfer in a moving high-temperature jet also occurs through heat transport by the moving particles of the jet. In this case, the intensity of convective heat transfer is proportional to the instantaneous value of the flow velocity at a given point in space. Consequently, the configuration of the temperature field in moving jets and flows essentially depends on the configuration of the velocity field. Keywords: plasma jet, plasma torch, heat transfer, enthalpy probe, turbulent Prandtl number
Procedia PDF Downloads 182
4987 Pudhaiyal: A Maze-Based Treasure Hunt Game for Tamil Words
Authors: Aarthy Anandan, Anitha Narasimhan, Madhan Karky
Abstract:
Word-based games are popular for helping people improve their vocabulary. Games like 'word search' and crosswords provide a smart way of building vocabulary skills. Word search games are fun to play and also educational, which actually helps in learning a language. Finding the words in a word search puzzle helps the player remember them more easily, and it also helps in learning their spellings. In this paper, we present a tile distribution algorithm for 'Pudhaiyal', a maze-based treasure hunt game for Tamil words, which describes how words can be distributed horizontally, vertically or diagonally in a 10 x 10 grid. Along with the tile distribution algorithm, we also present an algorithm for the scoring model of the game. The proposed game has been tested with 20,000 Tamil words. Keywords: Pudhaiyal, Tamil word game, word search, scoring, maze, algorithm
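A condensed sketch of the tile distribution idea is given below: each word is placed into a 10 x 10 grid in a randomly chosen horizontal, vertical or diagonal direction, a placement being accepted only when every cell is empty or already holds the same letter, and the unused cells are then filled with random letters. The example uses Latin letters and made-up words purely for brevity; the game itself works on Tamil words, and this is not the paper's exact algorithm.

```python
import random
import string

SIZE = 10
DIRECTIONS = [(0, 1), (1, 0), (1, 1)]   # horizontal, vertical, diagonal

def place_words(words, attempts=200):
    grid = [['' for _ in range(SIZE)] for _ in range(SIZE)]
    for word in words:
        for _ in range(attempts):
            dr, dc = random.choice(DIRECTIONS)
            row = random.randrange(SIZE - dr * (len(word) - 1))
            col = random.randrange(SIZE - dc * (len(word) - 1))
            cells = [(row + dr * i, col + dc * i) for i in range(len(word))]
            # A cell may be reused only if it already holds the same letter.
            if all(grid[r][c] in ('', word[i]) for i, (r, c) in enumerate(cells)):
                for i, (r, c) in enumerate(cells):
                    grid[r][c] = word[i]
                break
        else:
            raise RuntimeError(f"could not place {word!r}")
    # Fill the remaining cells with random letters.
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] == '':
                grid[r][c] = random.choice(string.ascii_uppercase)
    return grid

for row in place_words(["TREASURE", "MAZE", "PUZZLE", "WORD", "GAME"]):
    print(' '.join(row))
```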
Procedia PDF Downloads 442
4986 Analysis of Cyclic Elastic-Plastic Loading of Shaft Based on Kinematic Hardening Model
Authors: Isa Ahmadi, Ramin Khamedi
Abstract:
In this paper, the elasto-plastic cyclic torsion of a shaft is studied using a finite element method. The Prager kinematic hardening theory of plasticity, together with the Ramberg-Osgood stress-strain equation, is used to evaluate the cyclic loading behavior of the shaft under torsional loading. The material of the shaft is assumed to follow a non-linear strain hardening law based on the Prager model. A finite element method with C1 continuity is developed and used to solve the governing equations of the problem. The successive substitution iterative method is used to calculate the distribution of stresses and plastic strains in the shaft due to cyclic loads. The shear stress, effective stress, residual stress, and elastic and plastic shear strain distributions are presented in the numerical results. Keywords: cyclic loading, finite element analysis, Prager kinematic hardening model, torsion of shaft
Procedia PDF Downloads 408
4985 Chemical Constituents of Silene Arenarioides Desf
Authors: Haba Hamada, Lavaud Cathrine, Benkhaled Mohammed
Abstract:
The genus Silene is the most representative of the Caryophyllaceae family for its rich content of secondary metabolites; saponins, flavonoids and flavonoid glycosides, phytoecdysones and oligosaccharides have been isolated and identified. The genus Silene is represented by about 700 species in the temperate regions of the world, with the main concentration of species in Europe, Asia and North Africa. Three known compounds (1-3) were isolated from the aerial parts of Silene arenarioides Desf. using different chromatographic methods. The isolated compounds were identified as stigmasterol glycoside, soyacerebroside and maltol glycoside. Their structures were determined using NMR techniques (1H-NMR, 13C-NMR, COSY, HSQC, and HMBC) and mass spectrometry. The antimicrobial and antioxidant activities of the different extracts and compounds are also reported. Keywords: caryophyllaceae, flavonoids, saponosides, flavonoid glycosides
Procedia PDF Downloads 403
4984 Leverage Effect for Volatility with Generalized Laplace Error
Authors: Farrukh Javed, Krzysztof Podgórski
Abstract:
We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma-difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, which is used instead of the power in the asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for covariances and an explicit form for the conditional distribution of the 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data. Keywords: heavy tails, volatility clustering, generalized asymmetric laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models
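The gamma-difference representation mentioned above can be sketched in a few lines: a generalized asymmetric Laplace variate is simulated as the difference of two independent gamma variates sharing the shape parameter, which makes the role of the shape in controlling tail heaviness and asymmetry easy to see. The parameter values are arbitrary illustrations, not estimates from the paper.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)

def gal_sample(shape, scale_pos, scale_neg, size):
    """Generalized asymmetric Laplace via the gamma-difference representation:
    X = G1 - G2 with G1 ~ Gamma(shape, scale_pos), G2 ~ Gamma(shape, scale_neg)."""
    g1 = rng.gamma(shape, scale_pos, size)
    g2 = rng.gamma(shape, scale_neg, size)
    return g1 - g2

for shape in (0.5, 1.0, 2.0):
    x = gal_sample(shape, scale_pos=0.8, scale_neg=1.2, size=200_000)
    print(f"shape={shape:4.1f}  skewness={skew(x):+.3f}  excess kurtosis={kurtosis(x):+.3f}")
```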
Procedia PDF Downloads 386
4983 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
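The core of the COS step, recovering a distribution from its characteristic function with Fourier-cosine coefficients, can be illustrated on a toy case where the answer is known. The sketch below recovers the CDF of a standard normal from its characteristic function and compares it with the exact values; the standard normal merely stands in for the portfolio-loss distribution whose characteristic function would come from the factor-copula model, so this is not the paper's implementation, and the truncation range and number of terms are assumptions.

```python
import numpy as np
from scipy.stats import norm

def cos_cdf(phi, x, a, b, N=256):
    """Approximate F(x) from the characteristic function phi via the COS method:
    build the Fourier-cosine density expansion on [a, b] and integrate it
    term by term from a to x."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    A[0] *= 0.5                                   # first term carries weight 1/2
    x = np.atleast_1d(x)
    # Integral of cos(k*pi*(t-a)/(b-a)) over [a, x]: (x-a) for k=0,
    # (b-a)/(k*pi) * sin(u_k*(x-a)) for k >= 1.
    terms = np.where(k[None, :] == 0,
                     x[:, None] - a,
                     (b - a) / (np.pi * np.maximum(k[None, :], 1)) *
                     np.sin(u[None, :] * (x[:, None] - a)))
    return terms @ A

def phi_normal(u):
    return np.exp(-0.5 * u ** 2)                  # standard normal characteristic function

a, b = -10.0, 10.0
xs = np.array([-1.6449, 0.0, 1.6449])
print("COS CDF  :", np.round(cos_cdf(phi_normal, xs, a, b), 5))
print("exact CDF:", np.round(norm.cdf(xs), 5))
```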
Procedia PDF Downloads 167
4982 Pharmacological Active Compounds of Sponges and a Gorgonian Coral from the Andaman Sea, Thailand
Authors: Patchara Pedpradab, Kietisak Yoksang, Kosin Pattanamanee
Abstract:
In our ongoing search for pharmacologically significant compounds from marine organisms, we investigated the active constituents of two sponges (Xestospongia sp., Halichondria sp.) and a gorgonian coral (Juncella sp.) from the Andaman Sea, Thailand. Several compounds were isolated from these marine organisms. The marine sponge Xestospongia sp. contained an isoquinoline compound, namely aureol, and a cytotoxic thiophene sesterterpene, while Halichondria sp. produced C-28 sterols. The white gorgonian coral Juncella sp. contained the anti-tuberculosis diterpenes junceellin and praelolide. All of the isolated compounds were analyzed extensively by spectroscopic methods. Keywords: Xestospongia sp., Halichondria sp., gorgonian, Juncella sp., biological activity
Procedia PDF Downloads 366
4981 Non-Revenue Water Management in Palestine
Authors: Samah Jawad Jabari
Abstract:
Water is the most important and valuable resource not only for human life but also for all living things on the planet. The water supply utilities should fulfill the water requirement quantitatively and qualitatively. Drinking water systems are exposed to both natural (hurricanes and flood) and manmade hazards (risks) that are common in Palestine. Non-Revenue Water (NRW) is a manmade risk which remains a major concern in Palestine, as the NRW levels are estimated to be at a high level. In this research, Hebron city water distribution network was taken as a case study to estimate and audit the NRW levels. The research also investigated the state of the existing water distribution system in the study area by investigating the water losses and obtained more information on NRW prevention and management practices. Data and information have been collected from the Palestinian Water Authority (PWA) and Hebron Municipality (HM) archive. In addition to that, a questionnaire has been designed and administered by the researcher in order to collect the necessary data for water auditing. The questionnaire also assessed the views of stakeholder in PWA and HM (staff) on the current status of the NRW in the Hebron water distribution system. The important result obtained by this research shows that NRW in Hebron city was high and in excess of 30%. The main factors that contribute to NRW were the inaccuracies in billing volumes, unauthorized consumption, and the method of estimating consumptions through faulty meters. Policy for NRW reduction is available in Palestine; however, it is clear that the number of qualified staff available to carry out the activities related to leak detection is low, and that there is a lack of appropriate technologies to reduce water losses and undertake sufficient system maintenance, which needs to be improved to enhance the performance of the network and decrease the level of NRW losses.Keywords: non-revenue water, water auditing, leak detection, water meters
Procedia PDF Downloads 298
4980 CO2 Gas Solubility and Foam Generation
Authors: Chanmoly Or, Kyuro Sasaki, Yuichi Sugai, Masanori Nakano, Motonao Imai
Abstract:
The cold drainage mechanism of oil production is a complicated process which involves solubility and foaming processes. Laboratory experiments were carried out to investigate the CO2 gas solubility in hexadecane (as a light oil) and the effect of depressurization on microbubble generation. The experimental study of the sensitivity of CO2 gas solubility in hexadecane to temperature and pressure was conducted at temperatures of 20 °C and 50 °C and pressures ranging from 2.0 to 7.0 MPa using a PVT apparatus (RUSKA Model 2370). Foamy hexadecane samples were also prepared by depressurizing from a saturation pressure of 6.4 MPa at a temperature of 50 °C. The experimental results show that the CO2 gas solubility in hexadecane increases linearly with increasing pressure. At a pressure of 4.5 MPa, the amount of CO2 dissolved in hexadecane was 2.5 mmol·g⁻¹ at 50 °C and 3.5 mmol·g⁻¹ at 20 °C. Observation of the bubbles in the foamy hexadecane showed that most of the large bubbles coalesced shortly after formation, whereas the small ones persisted. The results for foamy hexadecane indicated that a large depressurization step (∆P) produces high-quality foam with a high density of microbubbles. Keywords: CO2 gas solubility, depressurization process, foamy hexadecane, microbubble distribution
Procedia PDF Downloads 492
4979 New Coordinate System for Countries with Big Territories
Authors: Mohammed Sabri Ali Akresh
Abstract:
This work builds on modern technologies and developments in computing, the Global Positioning System (GPS), Geographic Information Systems (GIS) and total stations (TS). The paper presents a new proposal for a coordinate system based on harmonic equations, 'United projections', which combines five projections (Mercator, Lambert, Russell, Lagrange, and a compound projection) in a single-zone coordinate system 14 degrees wide, with one degree of overlap between zones and two standard parallels per zone from 10° S to 45° S. The paper also presents two cases: the first compares distances between the new coordinate system and UTM; the second creates a local coordinate system for the city of Sydney to measure distances directly from rectangular coordinates using the Mercator, Lambert and UTM projections. Keywords: harmonic equations, coordinate system, projections, algorithms, parallels
Procedia PDF Downloads 473
4978 Wear Characteristics of Al Based Composites Fabricated with Nano Silicon Carbide Particles
Authors: Mohammad Reza Koushki Ardestani, Saeed Daneshmand, Mohammad Heydari Vini
Abstract:
In the present study, AA7075/SiO2 composites were fabricated via a liquid metallurgy process. Using a degassing process, the wettability of the molten aluminum alloy was increased, which improved the bonding between the aluminum matrix and the reinforcement (SiO2) particles. The AA7075 alloy and SiO2 particles were taken as the base matrix and reinforcement, respectively. Contents of 2.5 and 5 wt.% of SiO2 particles were added to the AA7075 matrix. To improve wettability and distribution, the reinforcement particles were pre-heated to a temperature of 550 °C for each composite sample. A uniform distribution of SiO2 particles through the matrix alloy was observed in the microstructural study. The wear rate was evaluated using a pin-on-disc wear testing machine with a hardened EN32 steel disc as the counter face. The results showed that the wear rate of the AA7075/SiO2 composites was lower than that of the monolithic AA7075 samples. Finally, the worn surfaces of the samples were investigated by SEM. Keywords: Al7075, SiO₂, wear, composites, stir casting
Procedia PDF Downloads 102
4977 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI
Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz
Abstract:
The Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have an important application in modeling the waves used in data inversion techniques such as full waveform inversion (FWI). In this work, a linearized version of the NSE and its variables, neglecting the deviatoric terms of the stress tensor, is presented. In order to obtain a theoretical model of the pressure p(x,t) and the wave velocity profile c(x,t), a wave equation for a visco-acoustic medium (VAE) is written. A change of variables, p(x,t)=q(x,t)h(ρ), is made in the VAE equation, leading to the well-known Klein-Gordon equation (KGE) describing waves propagating in a variable-density medium (ρ) with a dispersive term α^2(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α^2(x) that accounts for energy dissipation and dispersion. Finally, an integral form solution is derived for p(x,t), c(x,t) and kinematic variables such as the particle velocity v(x,t), the displacement u(x,t) and the bulk modulus function k_b(x,t). Furthermore, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the modern parallel computing point of view. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed. Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI
Procedia PDF Downloads 520
4976 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission
Authors: Changyeop Lee, Sewon Kim
Abstract:
Reburning is a useful technology for reducing nitric oxide through the injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner, which was mounted at the bottom of the furnace. Tests were conducted using LNG gas as the reburn fuel as well as the main fuel. The effects of the reburn fuel fraction and of the manner of injection of the reburn fuel were studied when the fuel lean reburning system was applied. The paper reports data on flue gas emissions and temperature distribution in the furnace for a wide range of experimental conditions. At steady state, the temperature distribution and emission formation in the furnace were measured and compared. This paper makes clear that, in order to decrease both NOx and CO concentrations in the exhaust when the pulsated fuel lean reburning system is adopted, it is important to control factors such as the pulsation frequency and duty ratio. It also shows that fuel lean reburning is as effective a method for reducing NOx as conventional reburning. Keywords: fuel lean reburn, NOx, CO, LNG flame
Procedia PDF Downloads 425
4975 Targeting Mineral Resources of the Upper Benue trough, Northeastern Nigeria Using Linear Spectral Unmixing
Authors: Bello Yusuf Idi
Abstract:
The Gongola arm of the Upper Benue Trough, Northeastern Nigeria, is predominantly covered by outcrops of limestone-bearing rocks in the form of sandstone with intercalations of carbonate clay, shale, basaltic, feldspathic and migmatite rocks at subpixel dimension. In this work, a subpixel classification algorithm was used to classify data acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) satellite system with the aim of producing fractional distribution images for the three most economically important solid minerals of the area: limestone, basalt and migmatite. The Linear Spectral Unmixing (LSU) algorithm was used to produce fractional abundance images of the three mineral resources within a 100 km² portion of the area. The results show that the minerals occur in different proportions across the area. The fractional maps could therefore serve as a guide for the ongoing reconnaissance of the economic potential of the formation. Keywords: linear spectral un-mixing, upper benue trough, gongola arm, geological engineering
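The linear spectral unmixing step can be sketched as a constrained least-squares problem: each pixel's spectrum is modelled as a non-negative mixture of endmember spectra, and the resulting abundances are normalized to sum to one. The endmember and pixel spectra below are synthetic placeholders, not ETM+ reflectances for limestone, basalt and migmatite.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows: 6 reflective bands,
# columns: limestone, basalt, migmatite).
endmembers = np.array([
    [0.45, 0.08, 0.20],
    [0.50, 0.09, 0.22],
    [0.55, 0.10, 0.25],
    [0.60, 0.12, 0.30],
    [0.58, 0.15, 0.33],
    [0.52, 0.18, 0.35],
])

def unmix(pixel_spectrum):
    """Non-negative least-squares abundance estimate, normalized to sum to 1."""
    abundances, _ = nnls(endmembers, pixel_spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances

# A synthetic mixed pixel: 60% limestone, 30% basalt, 10% migmatite plus noise
true_fractions = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_fractions + np.random.default_rng(0).normal(0, 0.005, 6)

print("estimated fractions:", np.round(unmix(pixel), 3))
```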
Procedia PDF Downloads 375
4974 Exploring Exposed Political Economy in Disaster Risk Reduction Efforts in Bangladesh
Authors: Shafiqul Islam, Cordia Chu
Abstract:
Bangladesh is one of the countries most vulnerable to climate-related disasters such as floods and cyclones. Drawing on semi-structured in-depth interviews with 38 stakeholders and a literature review, this study examined the public spending distribution process in DRR. The paper demonstrates how the political economy processes of enclosure, exclusion, encroachment, and entrenchment hinder the Disaster Risk Reduction (DRR) efforts of the Department of Disaster Management (DDM), such as the distribution of flood centres, cyclone centres and 40-day employment generation programs. Enclosure refers to cases in which DRR projects are allocated to less vulnerable areas or expand the roles of influential actors into the public sphere. Exclusion refers to cases in which DRR projects limit affected people's access to resources or marginalize particular stakeholders in decision-making activities. Encroachment refers to cases in which the allocation of DRR projects and the selection of locations and issues degrade the environment or contribute to other forms of disaster risk. Entrenchment refers to cases in which DRR projects aggravate the disempowerment of ordinary people and worsen concentrations of wealth and income inequality within a community. In line with the United Nations (UN) Sustainable Development Goals (SDGs) and the Hyogo and Sendai Frameworks, DRR policies in Bangladesh are implemented under the country's national five-year plan and disaster-related acts and rules. These policies and practices have somehow enabled influential elites to mobilize and distribute resources through bureaucracies. Exclusionary forms of DRR fund distribution exist at both the national and local scales. DRR-related allocations have encroached through lowland area development projects undertaken without consulting local needs. Most severely, unequal DRR-related allocations have entrenched social class divisions, leaving backward communities trapped in vulnerability to climate-related disasters. Planners and practitioners of DRR need to take the necessary steps to eliminate the potential risks arising from the processes of enclosure, exclusion, encroachment, and entrenchment in project fund allocations. Keywords: Bangladesh, disaster risk reduction, fund distribution, political economy
Procedia PDF Downloads 129
4973 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate and health related issues ambient light absorbing carbonaceous particulate matter (LAC) has also become a great scientific interest in terms of its regulations recently. It has been experimentally demonstrated in recent studies, that LAC is dominantly composed of traffic and wood burning aerosol particularly under wintertime urban conditions, when the photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion aerosol fractions emitted by wood burning and traffic but most of them require costly and time consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles such as optical absorption and size distribution can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption quantified by the Aerosol Angström Exponent (AAE). In this approach the absorption coefficient is deduced from transmission measurement on a filter accumulated aerosol sample and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) are determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, it also opens up novel possibilities on the source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and Single Mobility Particle Sizer (SMPS) The carbonaceous particulate selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified to correspond to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with simple in-situ measurement of aerosol size distribution data. The results by the proposed novel optical absorption based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous sources of emission.Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
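The two-component apportionment itself reduces to solving a small linear system once the two Angstrom exponents are fixed: the measured absorption at two wavelengths is split into a traffic (fossil-fuel) part and a wood-burning part whose spectral dependences follow their respective AAEs. The sketch below shows only this algebraic step, with illustrative AAE values and absorption coefficients that are not taken from the 4λ-PAS measurements.

```python
import numpy as np

def two_component_split(b_abs_l1, b_abs_l2, lam1, lam2, aae_ff, aae_wb):
    """Split measured absorption at two wavelengths into fossil-fuel (traffic)
    and wood-burning parts, each following a power law b(lam) ~ lam^(-AAE)."""
    r = lam2 / lam1
    # b_ff(l1) + b_wb(l1)                         = b_abs(l1)
    # b_ff(l1)*r^-AAE_ff + b_wb(l1)*r^-AAE_wb     = b_abs(l2)
    A = np.array([[1.0, 1.0],
                  [r ** (-aae_ff), r ** (-aae_wb)]])
    b = np.array([b_abs_l1, b_abs_l2])
    b_ff_l1, b_wb_l1 = np.linalg.solve(A, b)
    return b_ff_l1, b_wb_l1

# Illustrative numbers: absorption in Mm^-1 at 266 nm and 1064 nm, with commonly
# assumed exponents (about 1 for traffic, about 2 for wood burning).
b_ff, b_wb = two_component_split(b_abs_l1=120.0, b_abs_l2=18.0,
                                 lam1=266.0, lam2=1064.0,
                                 aae_ff=1.0, aae_wb=2.0)
print(f"traffic share at 266 nm: {b_ff / 120.0:.1%}, wood burning: {b_wb / 120.0:.1%}")
```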
Procedia PDF Downloads 228
4972 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in the sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using fed-batch vacuum evaporative crystallizer. The crystallization quality is examined by crystal size distribution at the end of the process which is quantified by two parameters: the average crystal size of the distribution in the mean aperture (MA) and the width of the distribution of the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for sugar crystallization process are not suitable as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables namely: initial crystal size (Lo), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models, and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model. It seems the initial crystal size (Lₒ) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R² was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during sugar crystallization process in a fed-batch vacuum evaporative crystallizer.Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
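A bare-bones version of the regression step is shown below: an ordinary least-squares fit of crystal size against the six online-measurable inputs, followed by the R² and maximum absolute relative error (MARE) metrics used in the study. The data are randomly generated stand-ins for the 128 factorial-design datasets, so the fitted coefficients are meaningless; only the workflow is illustrated.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 128

# Stand-in inputs: T, P, feed flowrate, steam flowrate, initial supersaturation, time
X = rng.uniform(0.0, 1.0, size=(n, 6))
true_coef = np.array([0.30, -0.15, 0.10, 0.20, 0.25, 0.40])
y = 0.5 + X @ true_coef + rng.normal(0, 0.01, n)      # synthetic crystal size (MA)

# Ordinary least squares with an intercept
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
mare = np.max(np.abs((y - y_hat) / y)) * 100

print("R^2  =", round(r2, 4))
print("MARE =", round(mare, 2), "%")
```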
Procedia PDF Downloads 208