Search results for: sequential units distribution
5977 An Automated Bender Element System Used for S-Wave Velocity Tomography during Model Pile Installation
Authors: Yuxin Wu, Yu-Shing Wang, Zitao Zhang
Abstract:
A high-speed, time-lapse S-wave velocity measurement system has been built for S-wave tomography in sand. The system is based on bender elements and is applied to model pile tests in a tailor-made pressurized chamber to monitor the shear wave velocity distribution during pile installation in sand. Tactile pressure sensors are used in parallel with the bender elements to monitor stress changes during the tests. Strain gages are used to monitor the shaft resistance and toe resistance of the pile. Since the shear wave velocity (Vs) is determined by the shear modulus of the sand, and the shaft resistance of the pile is also influenced by the shear modulus of the sand around the pile, the purposes of this study are to monitor, in time-lapse fashion, the change in S-wave velocity distribution at a given horizontal section during pile installation and to correlate the S-wave velocity distribution with the shaft resistance of the pile in sand.
Keywords: bender element, pile, shaft resistance, shear wave velocity, tomography
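The physical link the abstract relies on (Vs is set by the shear modulus of the sand) is the standard elastic relation Vs = sqrt(G/ρ). A minimal sketch of this relation, with illustrative values that are not taken from the paper:

```python
import math

def shear_wave_velocity(g_pa: float, rho_kg_m3: float) -> float:
    """S-wave velocity from small-strain shear modulus G and bulk density rho:
    Vs = sqrt(G / rho)."""
    return math.sqrt(g_pa / rho_kg_m3)

def shear_modulus(vs_m_s: float, rho_kg_m3: float) -> float:
    """Invert the relation: G = rho * Vs^2, which is how a Vs tomogram
    maps back to a shear-stiffness field."""
    return rho_kg_m3 * vs_m_s ** 2

# Illustrative dense sand: G ~ 80 MPa, rho ~ 1700 kg/m^3
vs = shear_wave_velocity(80e6, 1700.0)   # ~217 m/s
```

Inverting the measured Vs field with the second function is what lets a tomogram be read as a stiffness (and hence shaft-resistance) map.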
Procedia PDF Downloads 427
5976 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone
Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay
Abstract:
Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate with a variety of particle size ranges and 10 wt.% unsaturated polyester resin (UPR). In this study, the objective is to investigate the influence of particle size distribution on the mechanical strength and physical properties of engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63-200 µm and 100-300 µm, was used individually and in mixtures with different mixing ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder finer than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared using vibration-compression under vacuum. The study reports that both the impact strength and the flexural strength of the samples increased as the mix ratio of the 63-200 µm particle size range increased. On the other hand, the water absorption rate, apparent density, and abrasion resistance were not affected by the particle size distribution owing to vacuum compaction. It is found that increasing the mix ratio of the 63-200 µm particle size range resulted in higher porosity, which in turn increased the amount of binder paste needed. It is also observed that homogeneity in the slabs improved with the 63-200 µm particle size range.
Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties
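The void-volume step described above (measuring the packing voids to size the filler and resin fraction) can be sketched as follows; the densities and volume are illustrative assumptions, not values from the study:

```python
def void_fraction(bulk_density: float, particle_density: float) -> float:
    """Inter-particle void fraction of a granular packing, estimated from
    the measured bulk density and the true particle density."""
    return 1.0 - bulk_density / particle_density

def filler_volume(void_frac: float, packing_volume_cm3: float) -> float:
    """Volume of filler (quartz powder + UPR) needed to fill the voids."""
    return void_frac * packing_volume_cm3

# Illustrative quartz packing: bulk 1.55 g/cm^3, particle 2.65 g/cm^3
phi = void_fraction(1.55, 2.65)      # ~0.415 void fraction
vol = filler_volume(phi, 1000.0)     # filler cm^3 per litre of packing
```

A higher void fraction, as reported for the finer 63-200 µm packing, directly raises the binder-paste demand computed this way.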
Procedia PDF Downloads 144
5975 Short Term Distribution Load Forecasting Using Wavelet Transform and Artificial Neural Networks
Authors: S. Neelima, P. S. Subramanyam
Abstract:
The major tool for distribution planning is load forecasting, the anticipation of the load in advance. Artificial neural networks have found wide application in load forecasting as an efficient strategy for planning and management. In this paper, the application of neural networks to the design of short term load forecasting (STLF) systems was explored. Our work presents a pragmatic methodology for STLF using a proposed two-stage model combining the wavelet transform (WT) and an artificial neural network (ANN). In the first stage, the input data are decomposed by the wavelet transform; in the second stage, the decomposed data, along with other inputs, are used to train a separate neural network to forecast the load. The forecasted load is obtained by reconstruction of the decomposed components. The hybrid model has been trained and validated using load data from the Telangana State Electricity Board.
Keywords: electrical distribution systems, wavelet transform (WT), short term load forecasting (STLF), artificial neural network (ANN)
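The first stage of the proposed model, wavelet decomposition and reconstruction of the load series, can be sketched with a one-level Haar transform; this is an assumption for illustration (the abstract does not state which mother wavelet was used), and the ANN forecasting stage is omitted:

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar DWT: split a series of even length into an
    approximation (low-pass) and a detail (high-pass) component."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_reconstruct(a, d):
    """Invert the one-level Haar DWT exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Toy hourly load values (MW); each component would feed a separate ANN
load = np.array([310.0, 295.0, 280.0, 300.0, 340.0, 360.0, 355.0, 330.0])
a, d = haar_decompose(load)
rec = haar_reconstruct(a, d)   # perfect reconstruction of the input
```

In the full scheme, forecasts made on `a` and `d` separately are recombined through the reconstruction step to give the final load forecast.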
Procedia PDF Downloads 435
5974 Influence of Genotype, Explant, and Hormone Treatment on Agrobacterium-Transformation Success in Salix Callus Culture
Authors: Lukas J. Evans, Danilo D. Fernando
Abstract:
Shrub willows (Salix spp.) have many characteristics which make them suitable for a variety of applications such as riparian zone buffers, environmental contaminant sequestration, living snow fences, and biofuel production. In some cases, these functions are limited by physical or financial obstacles associated with the number of individuals needed to reasonably satisfy that purpose. One way to increase the efficiency of willows is to bioengineer them with the genetic improvements suited to the desired use. To accomplish this goal, an optimized in vitro transformation protocol via Agrobacterium tumefaciens is necessary to reliably express genes of interest. Therefore, the aim of this study is to observe the influence of tissue culture conditions (willow cultivar, hormones, and explant type) on the percentage of calli expressing the green fluorescent protein (GFP) reporter gene, in order to find ideal transformation conditions. Calli were produced from 1-month-old open-pollinated seedlings of three Salix miyabeana cultivars ('SX61', 'WT1', and 'WT2') from three different explants (lamina, petiole, and internode). Explants were cultured for 1 month on MS media with different concentrations of 6-benzylaminopurine (BAP) and 1-naphthaleneacetic acid (NAA) (no hormones; 1 mg L⁻¹ BAP only; 3 mg L⁻¹ NAA only; 1 mg L⁻¹ BAP and 3 mg L⁻¹ NAA; and 3 mg L⁻¹ BAP and 1 mg L⁻¹ NAA) to produce a callus. Samples were then treated for 30 minutes with Agrobacterium tumefaciens at an OD600 of 0.6-0.8 to insert the GFP transgene, co-cultivated for 72 hours, and selected for 1 week on the same media type they were cultured on, with 7.5 mg L⁻¹ hygromycin added, before GFP visualization under a UV dissecting scope. The percentage of GFP-expressing calli as well as the average number of fluorescing GFP units per callus were recorded, and the results were evaluated through an ANOVA test (α = 0.05).
The WT1 internode-derived calli on media with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP and with 1 mg L⁻¹ BAP alone produced a significantly higher percentage of GFP-expressing calli than every other group (19.1% and 19.4%, respectively). Additionally, the WT1 internode group cultured with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP produced an average of 2.89 GFP units per callus, while the group cultivated with 1 mg L⁻¹ BAP produced an average of 0.84 GFP units per callus. In conclusion, genotype, explant choice, and hormones all play a significant role in increasing successful transformation in willows. Future studies to produce whole-callus GFP expression and subsequent plantlet regeneration are necessary for a complete willow transformation protocol.
Keywords: agrobacterium, callus, Salix, tissue culture
Procedia PDF Downloads 123
5973 Enhancing the Pricing Expertise of an Online Distribution Channel
Authors: Luis N. Pereira, Marco P. Carrasco
Abstract:
Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible and differentiated prices for their services for different potential customers, considering the profile of e-consumers and the demand and market supply. The fundamentals of dynamic pricing are thus based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer profile in order to improve the number of reservations of an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of each market segment was studied, considering the characteristics of the e-consumers and the probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how it is possible to improve the intelligence of an online distribution channel system through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate market segmentation policy in online reservation systems benefits service suppliers because it brings a higher probability of reservation and generates more profit than fixed pricing.
Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics
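The per-segment price elasticity estimation mentioned above is commonly done with a log-log regression, where the fitted slope is the elasticity. A minimal sketch on synthetic data (the study's actual econometric models are not specified here):

```python
import numpy as np

def price_elasticity(prices, demand):
    """Estimate a constant price elasticity from the log-log model
    log(q) = a + e * log(p); the slope e is the elasticity of demand."""
    e, _a = np.polyfit(np.log(prices), np.log(demand), 1)
    return e

# Synthetic segment data generated with elasticity -1.5:
# demand falls as the room price rises
p = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
q = 5000.0 * p ** -1.5
e = price_elasticity(p, q)   # recovers -1.5
```

A segment with |e| > 1 (elastic) responds strongly to discounts, which is the kind of rule a dynamic pricing engine applies per segment.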
Procedia PDF Downloads 234
5972 Modelling Operational Risk Using Extreme Value Theory and Skew t-Copulas via Bayesian Inference
Authors: Betty Johanna Garzon Rozo, Jonathan Crook, Fernando Moreira
Abstract:
Operational risk losses are heavy tailed and are likely to be asymmetric and extremely dependent among business lines/event types. We propose a new methodology to assess, in a multivariate way, the asymmetry and extreme dependence between severity distributions, and to calculate the capital for operational risk. This methodology simultaneously uses (i) several parametric distributions and an alternative mixture distribution (the lognormal for the body of losses and the generalized Pareto distribution for the tail) via extreme value theory using SAS®, (ii) the multivariate skew t-copula, applied for the first time to operational losses, and (iii) Bayesian theory to estimate new n-dimensional skew t-copula models via Markov chain Monte Carlo (MCMC) simulation. This paper analyses a new operational loss data set, SAS Global Operational Risk Data [SAS OpRisk], to model operational risk at international financial institutions. All the severity models are constructed in SAS® 9.2 using the procedures PROC SEVERITY and PROC NLMIXED. This paper focuses on describing this implementation.
Keywords: operational risk, loss distribution approach, extreme value theory, copulas
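The mixture severity model described in (i), a lognormal body spliced to a generalized Pareto tail above a threshold u, can be sketched as a piecewise CDF; the parameter values below are illustrative, not fitted to the SAS OpRisk data:

```python
import math

def lognorm_cdf(x, mu, sigma):
    """Lognormal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def gpd_cdf(z, xi, beta):
    """Generalized Pareto CDF for an exceedance z >= 0 (shape xi > 0)."""
    return 1.0 - (1.0 + xi * z / beta) ** (-1.0 / xi)

def spliced_cdf(x, u, mu, sigma, xi, beta):
    """Severity CDF: lognormal body below the threshold u,
    GPD tail above it, glued continuously at u."""
    if x <= u:
        return lognorm_cdf(x, mu, sigma)
    p_u = lognorm_cdf(u, mu, sigma)
    return p_u + (1.0 - p_u) * gpd_cdf(x - u, xi, beta)

# Illustrative parameters: threshold 100, heavy GPD tail (xi = 0.5)
p_tail = spliced_cdf(500.0, 100.0, 3.0, 1.0, 0.5, 50.0)
```

The heavy GPD tail (xi > 0) is what drives the high quantiles used for the operational risk capital charge.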
Procedia PDF Downloads 600
5971 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating the possible formation of unwanted defects prior to expensive experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Modeling and statistical errors which arise from fitting these models will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experiments. The resulting sampled distribution of hyperparameters is approximated using maximum entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs.
With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models
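The MCMC step used to learn the uncertain coefficients can be sketched with a basic random-walk Metropolis sampler; the toy likelihood below (a normal model with known noise) stands in for the actual hyperelastic model fit, so all names and values are illustrative:

```python
import numpy as np

def metropolis(logpost, x0, n_samples, step, seed=0):
    """Random-walk Metropolis sampler for a 1-D posterior:
    propose x' = x + step * N(0, 1), accept with prob min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy "material parameter": five noisy observations, sigma = 0.1 known,
# flat prior, so the posterior centers on the sample mean
data = np.array([2.1, 1.9, 2.3, 2.0, 2.2])
logpost = lambda m: -0.5 * np.sum((data - m) ** 2) / 0.1 ** 2
samples = metropolis(logpost, 0.0, 5000, 0.1)
```

In the hierarchical setting, a sampler of this kind runs over both per-experiment parameters and the population-level hyperparameters.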
Procedia PDF Downloads 143
5970 Relational Attention Shift on Images Using Bu-Td Architecture and Sequential Structure Revealing
Authors: Alona Faktor
Abstract:
In this work, we present an NN-based computational model that can perform attention shifts according to high-level instructions. An instruction specifies the type of attentional shift using an explicit geometrical relation. The instruction can also be of a cognitive nature, specifying a more complex human-human, human-object, or object-object interaction. Applying this approach sequentially allows obtaining a structural description of an image. A novel data set of interacting humans and objects is constructed using a computer graphics engine. Using these data, we perform systematic research on relational segmentation shifts.
Keywords: cognitive science, attention, deep learning, generalization
Procedia PDF Downloads 197
5969 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can relatively quickly compare a number of the most commonly adopted probability distributions and parameter estimation methods using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs-Beck test, which can identify multiple potentially influential low flows. This paper presents a case study considering six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs-Beck test and the multiple Grubbs-Beck test) and two commonly applied probability distributions (generalized extreme value (GEV) and log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs-Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution used with the original Grubbs-Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs-Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in flood quantiles at an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
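The original Grubbs-Beck low-outlier screen, applied to log-transformed annual peaks, can be sketched as follows. The critical-value formula is the common polynomial approximation to the 10%-significance values (as used in Bulletin 17B practice); this is an assumption here, since the abstract does not describe FLIKE's internal implementation, and the flow data are invented:

```python
import math

def k10_grubbs_beck(n: int) -> float:
    """Approximate 10%-significance critical value K_N for sample size n
    (polynomial approximation used in Bulletin 17B practice; assumption)."""
    lg = math.log10(n)
    return -0.9043 + 3.345 * math.sqrt(lg) - 0.4046 * lg

def grubbs_beck_threshold(flows):
    """Low-outlier threshold X_crit = exp(mean - K_N * sd), where mean and
    sd are computed on the log-transformed annual peak flows."""
    logs = [math.log(q) for q in flows]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
    return math.exp(mean - k10_grubbs_beck(n) * sd)

# Invented annual peaks (m^3/s) with one suspiciously low year
peaks = [120.0, 340.0, 95.0, 410.0, 280.0, 15.0, 190.0, 260.0, 310.0, 150.0]
x_crit = grubbs_beck_threshold(peaks)
low_outliers = [q for q in peaks if q < x_crit]
```

The multiple Grubbs-Beck variant repeats a test of this kind so that several low flows can be censored together rather than one at a time.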
Procedia PDF Downloads 449
5968 Attitudinal Change: A Major Therapy for Non-Technical Losses in the Nigerian Power Sector
Authors: Fina O. Faithpraise, Effiong O. Obisung, Azele E. Peter, Chris R. Chatwin
Abstract:
This study identifies consumer attitude as a major influence behind non-technical losses in the Nigerian electricity supply sector. The finding is established through a survey combining quantitative and qualitative research. The dataset is a simple random sample of households using electricity (public power supply), with the number of units chosen based on statistical power analysis. The units were subdivided into two categories: households with and without electricity meters. The hypothesis formulated was tested and analyzed using the chi-square method. The results show that the computed statistic exceeded the critical value both for households with an electricity prepaid meter (EPM) (9.488 < 427.4) and for those without one (EPMn) (9.488 < 436.1), with a p-value of 0.01%. The analysis establishes that the wrong attitude towards handling the electricity supplied (not turning off light bulbs and electrical appliances when not in use, within rooms and outdoors, for up to 12 hours of the day) characterizes the non-technical losses in the power sector. Therefore, the adoption of efficient lighting attitudes in individual households, as recommended by the researchers, is greatly encouraged. The results of this study should serve as a model for energy efficiency and for the improvement of electricity consumption as well as a stable economy.
Keywords: attitudinal change, household, non-technical losses, prepaid meter
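The chi-square comparison reported above (a computed statistic tested against the critical value 9.488, which corresponds to 4 degrees of freedom at the 5% level) can be sketched as follows; the observed and expected counts are illustrative, not the survey data:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum over categories of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative 5-category attitude responses, so df = 5 - 1 = 4
observed = [120, 80, 40, 30, 30]
expected = [60, 60, 60, 60, 60]       # uniform null hypothesis
stat = chi_square_stat(observed, expected)

CRITICAL_4DF_5PCT = 9.488             # chi-square critical value, df=4, alpha=0.05
reject_null = stat > CRITICAL_4DF_5PCT
```

A statistic far above the critical value, as in the study's 427.4 and 436.1, leads to rejecting the null hypothesis of no attitude effect.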
Procedia PDF Downloads 179
5967 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment
Authors: P. K. Singhal, R. Naresh, V. Sharma
Abstract:
This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the problem of local trapping encountered with the NBABC algorithm. These models are used to decide the on/off status of the units, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.
Keywords: artificial bee colony algorithm, economic dispatch, unit commitment, wind power
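The overestimation and underestimation costs derived from the Weibull wind model can be sketched with a Monte Carlo estimate; the linear power curve (cut-in 3 m/s, rated at 12 m/s), the Weibull parameters, and the penalty prices are all assumptions for illustration, not the paper's values:

```python
import numpy as np

def wind_imbalance_costs(w_sched, k, c, rated, c_under, c_over,
                         n=200_000, seed=1):
    """Monte Carlo estimate of the expected underestimation and
    overestimation costs of scheduling w_sched MW from a wind farm whose
    wind speed follows a Weibull(shape=k, scale=c) distribution."""
    rng = np.random.default_rng(seed)
    v = c * rng.weibull(k, n)                                   # wind speeds
    # Illustrative linear power curve between cut-in 3 m/s and rated 12 m/s
    w = np.clip(rated * (v - 3.0) / (12.0 - 3.0), 0.0, rated)   # MW available
    under = c_under * np.maximum(w - w_sched, 0.0).mean()  # surplus wasted
    over = c_over * np.maximum(w_sched - w, 0.0).mean()    # shortfall bought
    return under, over

under, over = wind_imbalance_costs(20.0, 2.0, 8.0, 50.0, 10.0, 30.0)
```

These two expected costs are what the unit commitment objective trades off when deciding how much wind power to schedule alongside the thermal units.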
Procedia PDF Downloads 374
5966 Combined Power Supply at Well Drilling in Extreme Climate Conditions
Authors: V. Morenov, E. Leusheva
Abstract:
Power supply for well drilling on oil and gas fields at low ambient air temperatures is characterized by increased requirements for electric and heat energy. Power costs for heating production facilities and technological and living quarters may exceed the electric power consumption of the drilling equipment several times over. Power for prospecting and exploration drilling sites is usually supplied by local electric power structures based on diesel power stations. At the same time, exploitation of oil fields is accompanied by vast quantities of extracted associated petroleum gas, and while developing gas fields there are considerable amounts of natural gas and gas condensate. In this regard, the implementation of self-sufficient gas-powered units running on the produced crude products is seen as the most promising option for power supply. For these purposes, gas turbines (GT) or gas reciprocating engines (GRE) may be used. In addition, gas-powered units are used most efficiently in cogeneration mode, i.e. combined heat and power production. The research conducted revealed that GTs generate more heat than GREs while producing electricity. One of the latest GT designs is the microturbine (MT), a device that may be efficiently exploited in combined heat and power mode. In conditions of low ambient air temperatures and high wind velocity, sufficient heat supply is required both for the technological process, specifically for drilling mud heating, and for maintaining comfortable working conditions at the rig. One of the main parameters of the heat regime is the heat losses. Due to the structural peculiarities of the rig, most of the heat losses occur through cold air infiltration via the technological apertures and hatchways and through heat transmission across the insulating constructions. A significant amount of heat is also required to sustain the working temperature of the drilling mud: violation of the circulation thermal regime may lead to ice build-up on well surfaces and ice blockages in armature elements.
That is why it is important to ensure heating of the drilling mud chamber according to the ambient air temperature; the heat power needed is defined by the heat losses of the chamber. Taking into account the heat power required for the drilling structure to function, it is possible to create a combined heat and power complex based on MTs that satisfies consumer power needs while lowering power generation costs. As a result, a combined power supply scheme for multiple well drilling utilizing the heat of MT flue gases was developed.
Keywords: combined heat, combined power, drilling, electric supply, gas-powered units, heat supply
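The two heat-loss terms identified above, transmission through the enclosure and infiltration of cold air, can be sketched with the standard steady-state relations Q = U·A·ΔT and Q = ρ·c_p·V·ΔT; the rig dimensions, U-value, and airflow are illustrative assumptions, not values from the paper:

```python
def transmission_loss(u_value, area_m2, t_in, t_out):
    """Heat loss through an enclosure element: Q = U * A * dT, in watts."""
    return u_value * area_m2 * (t_in - t_out)

def infiltration_loss(airflow_m3_s, t_in, t_out, rho=1.35, cp=1005.0):
    """Heat needed to warm infiltrating cold air: Q = rho * cp * V * dT,
    in watts (rho, cp are properties of cold air)."""
    return rho * cp * airflow_m3_s * (t_in - t_out)

# Illustrative rig enclosure: -40 C ambient, +15 C inside
q_walls = transmission_loss(0.8, 600.0, 15.0, -40.0)   # W through envelope
q_air = infiltration_loss(2.0, 15.0, -40.0)            # W for infiltration
q_total_kw = (q_walls + q_air) / 1000.0                # sizing target, kW
```

A total of this order is what the recoverable heat of the MT flue gases would be matched against when sizing the cogeneration complex.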
Procedia PDF Downloads 574
5965 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste
Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera
Abstract:
Biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and (3) temperature-phased anaerobic digestion (TPAD), which combines a thermophilic pretreatment unit with subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. Co-digestion, on the other hand, is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. The valorization is approached through the application of a temperature-phased process, i.e. TPAD (Temperature-Phased Anaerobic Digestion) technology. Moreover, two phases of microorganisms are considered: the selected process allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase to obtain hydrogen (H₂) in the first stage and methane (CH₄) in the second stage.
The combination of these technologies makes it possible to unify the individual advantages of these anaerobic digestion processes. To achieve these objectives, a sequential study has been carried out in which the biochemical hydrogen potential (BHP) is tested, followed by a BMP test, which allows the feasibility of the two-stage process to be checked. The best results obtained were high total and soluble COD removal (59.8% and 82.67%, respectively), as well as production rates of 12 L H₂/kg VS added for hydrogen and 28.76 L CH₄/kg VS added for methane with TPAD.
Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste
Procedia PDF Downloads 154
5964 Enhanced Magnetic Hyperthermic Efficiency of Ferrite Based Nanoparticles
Authors: J. P. Borah, R. D. Raland
Abstract:
Hyperthermia is one of many techniques used to destroy cancerous cells. It uses physical methods to heat a certain organ or tissue, delivering an adequate temperature over an appropriate period of time to the entire tumor volume in order to achieve optimal therapeutic results. Magnetic metal ferrite nanoparticles (MFe₂O₄, where M = Mn, Zn, Ni, Co, Mg, etc.) are among the most promising candidates for hyperthermia due to their tunability, biocompatibility, chemical stability, and notable ability to mediate a high rate of heat induction. However, to obtain the desirable properties for these applications, it is important to optimize their chemical composition, structure, and magnetic properties. These properties are mainly sensitive to the cation distribution over the tetrahedral and octahedral sites. Among the ferrites, zinc ferrite (ZnFe₂O₄) and manganese ferrite (MnFe₂O₄) are strong candidates for hyperthermia application because Mn and Zn have non-magnetic cations, so the magnetic property is determined only by the cation distribution of iron, which provides a better platform to manipulate or tailor the properties. In this talk, the influence of doping and surfactants on cation redistribution, leading to an enhancement of the magnetic properties of ferrite nanoparticles, will be demonstrated. The efficiency of heat generation associated with the enhanced magnetic properties is also discussed.
Keywords: magnetic nanoparticle, hyperthermia, x-ray diffraction, TEM study
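Heating efficiency in magnetic hyperthermia is usually reported as the specific absorption rate (SAR), estimated from the initial temperature-rise slope of the suspension. A minimal sketch of that standard calculation, with illustrative numbers not taken from the talk:

```python
def sar_w_per_g(c_p_j_per_g_k, slope_k_per_s, np_conc_g_per_g):
    """Specific absorption rate from the initial-slope method:
    SAR = (c_p / c_np) * dT/dt, in watts per gram of magnetic material,
    where c_p is the fluid heat capacity and c_np the nanoparticle
    mass fraction in the fluid."""
    return c_p_j_per_g_k * slope_k_per_s / np_conc_g_per_g

# Water suspension (c_p ~ 4.186 J/(g K)), 5 mg of ferrite per gram of
# fluid, initial heating slope 0.02 K/s under the applied AC field
sar = sar_w_per_g(4.186, 0.02, 0.005)   # W per gram of nanoparticles
```

A higher saturation magnetization from the tailored cation distribution shows up directly as a steeper initial slope, and hence a larger SAR.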
Procedia PDF Downloads 162
5963 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological, and functional properties. During singing, a large subset of the motor-cortex-analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song; a large subset of the basal-ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations; and HVCINT interneurons fire tonically at a high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and the specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing.
Examples of possible mechanisms include: 1) post-inhibitory rebound in HVCX neurons and their population patterns during singing; 2) different subclasses of HVCINT neurons interacting via inhibitory-inhibitory loops; 3) monosynaptic HVCX-to-HVCRA excitatory connectivity; and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
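The conductance-based modeling framework referred to above can be sketched with the classic Hodgkin-Huxley equations integrated by forward Euler; note that this uses the textbook squid-axon channel set and parameters, not the HVC-specific currents of the study:

```python
import numpy as np

# Classic HH gating-rate functions (voltage v in mV)
def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

def simulate_hh(i_ext=10.0, t_ms=50.0, dt=0.01):
    """Forward-Euler integration of a single Hodgkin-Huxley neuron
    driven by a constant current i_ext (uA/cm^2)."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3       # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387   # mV
    c_m = 1.0                               # uF/cm^2
    v, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
    vs = []
    for _ in range(int(t_ms / dt)):
        i_na = g_na * m ** 3 * h * (v - e_na)
        i_k = g_k * n ** 4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        vs.append(v)
    return np.array(vs)

trace = simulate_hh()   # repetitive spiking at this drive level
```

Network models of the kind described in the abstract couple many such units through synaptic current terms added to the membrane equation.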
Procedia PDF Downloads 69
5962 Size Distribution Effect of InAs/InP Self-Organized Quantum Dots on Optical Properties
Authors: Abdelkader Nouri, M’hamed Bouslama, Faouzi Saidi, Hassan Maaref, Michel Gendry
Abstract:
Self-organized InAs quantum dots (QDs) have been grown on an InP (110) substrate, with a 3.1% lattice mismatch, by solid source molecular beam epitaxy (SSMBE). The Stranski-Krastanov growth mode has been used to create self-assembled 3D islands on the InAs wetting layer (WL). The optical quality is evaluated as a function of temperature and excitation power. In addition, atomic force microscopy (AFM) images show an inhomogeneous island size distribution due to temperature-driven coalescence. The quantum size effect was clearly observed through the shape of the photoluminescence (PL) spectra.
Keywords: AFM, InAs QDs, PL, SSMBE
Procedia PDF Downloads 685
5961 Strain Distribution Profiles of EDD Steel at Elevated Temperatures
Authors: Eshwara Prasad Koorapati, R. Raman Goud, Swadesh Kumar Singh
Abstract:
In the present work, forming limit diagrams (FLDs) and strain distribution profile diagrams for extra deep drawing (EDD) steel at room and elevated temperatures have been determined experimentally by conducting stretch forming experiments using a designed and fabricated warm stretch forming tooling setup. With the help of the FLDs and strain distribution profile diagrams, the formability of EDD steel has been analyzed and correlated with mechanical properties such as the strain hardening coefficient (n) and the normal anisotropy (r̄). Mechanical properties of EDD steel from room temperature to 450 °C were determined, and the impact of temperature on properties such as the work hardening exponent (n), anisotropy (r̄), and strength coefficient of the material is discussed. In addition, the fractured surfaces after stretching underwent metallurgical investigation, and an attempt has been made to correlate the results with the formability of the EDD steel sheets; they show good agreement with the FLDs at various temperatures.
Keywords: FLD, micro hardness, strain distribution profile, stretch forming
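The work hardening exponent (n) and strength coefficient (K) discussed above are conventionally extracted by fitting the Hollomon law σ = K·εⁿ in log-log space. A minimal sketch on a synthetic flow curve (illustrative values, not the EDD steel data):

```python
import numpy as np

def hollomon_fit(true_strain, true_stress):
    """Fit the Hollomon hardening law sigma = K * eps^n by linear least
    squares in log-log space; returns (n, K), the strain-hardening
    exponent and the strength coefficient."""
    n, log_k = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
    return n, np.exp(log_k)

# Synthetic flow curve generated with n = 0.22, K = 520 MPa
eps = np.array([0.02, 0.05, 0.10, 0.15, 0.20])
sigma = 520.0 * eps ** 0.22
n, k = hollomon_fit(eps, sigma)   # recovers n = 0.22, K = 520
```

Repeating this fit on tensile data at each test temperature is how the temperature dependence of n and K reported above is obtained.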
Procedia PDF Downloads 418
5960 The Role of the Rate of Profit Concept in Creating Economic Stability in Islamic Financial Market
Authors: Trisiladi Supriyanto
Abstract:
This study aims to establish a concept of the rate of profit in Islamic banking that can create economic justice and stability in the Islamic financial market (banking and capital markets). A rate of profit that creates economic justice and stability can be achieved through its role in maintaining the stability of the financial system, in which there is an equitable distribution of income and wealth. To determine the role of the rate of profit as the basis of the profit-sharing system implemented in the Islamic financial system, we can examine its connection to financial stability, especially in the asset-liability management of financial institutions that generate a stable net margin, that is, a rate of profit that is not affected by the ups and downs of market risk factors, including the indirect effect of interest rates. Furthermore, Islamic financial stability can be seen in the role of the rate of profit in stabilizing the value of Islamic financial assets, as measured by the price volatility of Islamic financial assets in the Islamic bond market within the capital market.
Keywords: economic justice, equitable distribution of income, equitable distribution of wealth, rate of profit, stability in the financial system
Procedia PDF Downloads 312
5959 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions
Authors: Debasis Sengupta, Sudipta Das
Abstract:
The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, no analogous result is available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, the minimization of the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to its minimization over a smaller class of distributions with a special form; optimization over this restricted set can be carried out algebraically. Likewise, the maximization of the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also prove a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) with a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
Keywords: DMRL, IMRL, reliability bounds, hazard functions
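A minimal sketch (not from the paper) of the definition underlying the DMRL class: the mean residual life m(t) = E[X − t | X > t] must be non-increasing in t. The samples and time grid below are illustrative, computed empirically in pure Python.

```python
# Hypothetical sketch: empirical mean residual life (MRL) and a DMRL check.
# A life distribution is DMRL when m(t) = E[X - t | X > t] is non-increasing.

def mean_residual_life(samples, t):
    """Empirical MRL at time t: average of (x - t) over samples exceeding t."""
    exceed = [x - t for x in samples if x > t]
    return sum(exceed) / len(exceed) if exceed else 0.0

def is_dmrl(samples, grid):
    """Check that the empirical MRL is non-increasing over the time grid."""
    mrls = [mean_residual_life(samples, t) for t in grid]
    return all(a >= b for a, b in zip(mrls, mrls[1:]))

# A uniform distribution is strictly DMRL: its MRL is roughly (1 - t) / 2.
uniform_samples = [i / 1000.0 for i in range(1, 1001)]  # approx U(0, 1)
grid = [0.0, 0.2, 0.4, 0.6, 0.8]
print(is_dmrl(uniform_samples, grid))
```

An exponential sample, by contrast, has constant MRL (equal to its mean) and sits on the boundary of the DMRL class.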
Procedia PDF Downloads 396
5958 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help reduce the impact of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. All subjects (with and without pes planus) were then evaluated for static plantar pressure distribution. Patients diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on plantar pressure mapping images. Findings: Of a total of 895 images, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was assessed, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14% and the test accuracy was 72.09%. Conclusion: This model can be easily used by patients with pes planus and by doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
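An illustrative sketch (not the authors' model) of the convolution operation at the core of a CNN, applied to a toy plantar pressure map in pure Python; the pressure values and kernel are invented for illustration. A real pes planus classifier would run many such trained filters over full pressure images.

```python
# Illustrative sketch: the 2D convolution a CNN applies to a pressure map.
# The toy map and edge-like kernel below are made-up, not clinical data.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) in pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# Toy 4x4 "pressure map"; the kernel responds where a loaded region (higher
# pressure, as in a collapsed medial arch) meets an unloaded region.
pressure = [[0, 0, 1, 1],
            [0, 0, 1, 1],
            [0, 0, 1, 1],
            [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(pressure, kernel))
```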
Procedia PDF Downloads 358
5957 Effect of Depth on the Distribution of Zooplankton in Wushishi Lake Minna, Niger State, Nigeria
Authors: Adamu Zubairu Mohammed, Fransis Oforum Arimoro, Salihu Maikudi Ibrahim, Y. I. Auta, T. I. Arowosegbe, Y. Abdullahi
Abstract:
The present study was conducted to evaluate the effect of depth on the distribution of zooplankton and on some physicochemical parameters in Tungan Kawo Lake (Wushishi dam). Water and zooplankton samples were collected from the surface, from 3.0 m depth, and from 6.0 m depth over a period of 24 hours for six months. Standard procedures were adopted for the determination of physicochemical parameters. Results showed significant differences in pH, DO, BOD, hardness, Na, and Mg. A total of 1764 zooplankton were recorded, comprising 35 species: 18 species of Cladocera (58%), 14 species of Copepoda (41%), and 3 species of Diptera (1.0%). More zooplankton were recorded at 3.0 m depth than at the other two depths, and a significant difference was observed in the distribution of Ceriodaphnia dubia, Daphnia laevis, and Leptodiaptomus coloradensis. Although overall abundance was highest at 3.0 m depth, Leptodiaptomus coloradensis was the species with the most individuals at 6.0 m depth, followed by Daphnia laevis. Canonical correspondence analysis between the physicochemical parameters and the zooplankton indicated good relationships in the lake: Ceriodaphnia dubia was found to have a good association with oxygen, sodium, and potassium, while Daphnia laevis and Leptodiaptomus coloradensis were associated with magnesium and phosphorus. It was generally observed that depth does not have much influence on the distribution of zooplankton in Wushishi Lake.
Keywords: zooplankton, standard procedures, canonical correspondence analysis, Wushishi, physicochemical parameters
Procedia PDF Downloads 88
5956 Detecting the Edge of Multiple Images in Parallel
Authors: Prakash K. Aithal, U. Dinesh Acharya, Rajesh Gopakumar
Abstract:
An edge is a variation of brightness in an image. Edge detection is useful in many application areas, such as finding forests and rivers in a satellite image or detecting a broken bone in a medical image. The paper discusses finding the edges of multiple aerial images in parallel. The proposed work was tested on 38 images: 37 color images and one monochrome image. The time taken to process N images in parallel is equivalent to the time taken to process one image sequentially. The proposed method achieves pixel-level parallelism as well as image-level parallelism.
Keywords: edge detection, multicore, GPU, OpenCL, MPI
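The image-level parallelism described above can be sketched with Python's standard thread pool (the authors used OpenCL/MPI on GPU and multicore hardware; the gradient filter and the tiny images here are illustrative):

```python
# Hedged sketch of image-level parallelism: each image is an independent
# task, so N images can be dispatched concurrently and ideally finish in
# the time one image takes sequentially.
from concurrent.futures import ThreadPoolExecutor

def edge_map(image):
    """Simple gradient-magnitude edge detector (|dx| + |dy|) for one image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w - 1):
            dx = image[i][j + 1] - image[i][j]
            dy = image[i + 1][j] - image[i][j]
            out[i][j] = abs(dx) + abs(dy)
    return out

def detect_edges_parallel(images):
    """Process all images concurrently, preserving input order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(edge_map, images))

images = [[[0, 0, 9], [0, 0, 9], [9, 9, 9]] for _ in range(4)]
parallel = detect_edges_parallel(images)
sequential = [edge_map(img) for img in images]
print(parallel == sequential)  # identical results, computed in parallel
```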
Procedia PDF Downloads 476
5955 HPSEC Application as a New Indicator of Nitrification Occurrence in Water Distribution Systems
Authors: Sina Moradi, Sanly Liu, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Soha Habibi, Rose Amal
Abstract:
In recent years, chloramine has been widely used for both primary and secondary disinfection. However, a major concern with the use of chloramine as a secondary disinfectant is chloramine decay and the occurrence of nitrification. The management of chloramine decay and the prevention of nitrification are critical for water utilities managing chloraminated drinking water distribution systems. The detection and monitoring of nitrification episodes is usually carried out by measuring certain water quality parameters, commonly referred to as indicators of nitrification. The approach taken in this study was to collect water samples from different sites throughout a drinking water distribution system, Tailem Bend – Keith (TBK) in South Australia, and analyse the samples by high performance size exclusion chromatography (HPSEC). We investigated potential associations between the water quality profiles from HPSEC analysis and chloramine decay and/or nitrification occurrence. MATLAB 8.4 was used for processing the HPSEC and chloramine decay data. An increase in the absorbance signal of HPSEC profiles at λ=230 nm between apparent molecular weights (AMW) of 200 to 1000 Da was observed at sampling sites that experienced rapid chloramine decay and nitrification, while the absorbance signal of HPSEC profiles at λ=254 nm decreased. An increase in absorbance at λ=230 nm and AMW < 500 Da was detected for Raukkan CT (R.C.T), a location that experienced nitrification and had significantly lower chloramine residual (<0.1 mg/L). This increase in absorbance was not detected at other sites that did not experience nitrification. Moreover, the UV absorbance at 254 nm of the HPSEC spectra was lower at R.C.T than at other sites. In this study, a chloramine residual index (C.R.I) is introduced as a new indicator of chloramine decay and nitrification occurrence, defined as the ratio of the areas underneath the HPSEC spectra at the two wavelengths of 230 and 254 nm. The C.R.I index is able to indicate distribution system sites that experienced nitrification and rapid chloramine loss. This index could be useful for water treatment and distribution system managers to determine whether nitrification is occurring at a specific location in a water distribution system.
Keywords: nitrification, HPSEC, chloramine decay, chloramine residual index
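A minimal sketch of computing such an area-ratio index: the abstract defines the C.R.I as a ratio of areas under the 230 nm and 254 nm HPSEC spectra, so the sketch integrates each trace by the trapezoidal rule. The orientation of the ratio (230 nm area over 254 nm area) and all spectral values below are assumptions, not data from the TBK system.

```python
# Illustrative sketch of an area-ratio chloramine residual index (C.R.I).
# Spectra are made-up numbers; the ratio orientation is an assumption.

def trapezoid_area(x, y):
    """Area under a sampled curve by the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0
               for i in range(len(x) - 1))

def chloramine_residual_index(amw, abs_230, abs_254):
    """C.R.I as the ratio of areas under the 230 nm and 254 nm spectra."""
    return trapezoid_area(amw, abs_230) / trapezoid_area(amw, abs_254)

# Apparent molecular weight axis (Da) and two hypothetical absorbance traces:
amw = [200, 400, 600, 800, 1000]
abs_230 = [0.10, 0.30, 0.25, 0.15, 0.05]  # 230 nm signal, rises on nitrification
abs_254 = [0.20, 0.25, 0.20, 0.15, 0.10]  # 254 nm signal, falls on nitrification
print(round(chloramine_residual_index(amw, abs_230, abs_254), 3))
```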
Procedia PDF Downloads 298
5954 Influence of Propeller Blade Lift Distribution on Whirl Flutter Stability Characteristics
Authors: J. Cecrdle
Abstract:
This paper deals with the whirl flutter of turboprop aircraft structures. It is focused on the influence of the span-wise distribution of blade lift on whirl flutter stability. First, it gives the overall theoretical background of the whirl flutter phenomenon. After that, the solution of the propeller blade forces and the options for modelling the blade lift are described. The problem is demonstrated on the example of a twin turboprop aircraft structure. The influences are evaluated with respect to the propeller aerodynamic derivatives and, finally, with respect to the whirl flutter speed and the whirl flutter margin, respectively.
Keywords: aeroelasticity, flutter, propeller blade force, whirl flutter
Procedia PDF Downloads 534
5953 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics
Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh
Abstract:
In the last decade there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. The correct design of a grinding circuit is therefore important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and vertical stirred grinding mills are consequently becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill
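A hedged sketch of the batch-grinding Population Balance Model that underlies this kind of breakage-parameter fitting: for size class i (indexed coarse to fine), dm_i/dt = −S_i·m_i + Σ_{j<i} b_ij·S_j·m_j, where S is the selection (breakage rate) function and b the breakage distribution function. The S and b values below are invented for illustration, not fitted Bond or stirred-mill parameters.

```python
# Hedged sketch: explicit Euler integration of the batch PBM.
# m, S indexed coarse -> fine; b[i][j] is the fraction of broken class-j
# material reporting to class i (each column of b sums to 1).

def pbm_step(m, S, b, dt):
    """One Euler step: death by breakage out of class i, birth from j < i."""
    n = len(m)
    new_m = []
    for i in range(n):
        birth = sum(b[i][j] * S[j] * m[j] for j in range(i))
        death = S[i] * m[i]
        new_m.append(m[i] + dt * (birth - death))
    return new_m

m = [1.0, 0.0, 0.0]      # all mass starts in the coarsest class
S = [0.5, 0.3, 0.0]      # finest class does not break further (1/min, assumed)
b = [[0.0, 0.0, 0.0],
     [0.6, 0.0, 0.0],    # 60% of broken coarse material -> middle class
     [0.4, 1.0, 0.0]]    # the rest, and all broken middle -> finest class

for _ in range(10):      # simulate 1 minute of grinding in 0.1 min steps
    m = pbm_step(m, S, b, dt=0.1)
print([round(x, 4) for x in m])
```

Because each column of b sums to one, total mass is conserved at every step, which is a quick sanity check on any fitted breakage distribution function.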
Procedia PDF Downloads 291
5952 Numerical Simulation of Solar Reactor for Water Disinfection
Authors: A. Sebti Bouzid, S. Igoud, L. Aoudjit, H. Lebik
Abstract:
Mathematical modeling and numerical simulation have emerged over the past two decades as key tools for designing and optimizing the performance of physical and chemical processes intended for water disinfection. Water photolysis is an efficient and economical technique to reduce bacterial contamination. It exploits the germicidal effect of solar ultraviolet irradiation to inactivate pathogenic microorganisms. The design of a photo-reactor operating as a continuous disinfection system requires taking into account the hydrodynamic behavior of water in the reactor. Since the kinetics of disinfection depend on the irradiation intensity distribution, coupling the hydrodynamics and the solar radiation distribution is of crucial importance. In this work we propose a numerical simulation study of the hydrodynamics and solar irradiation distribution in a tubular photo-reactor. We have used the computational fluid dynamics code Fluent under the assumption of three-dimensional incompressible flow in an unsteady turbulent regime. The simulation results concerning the radiation, temperature, and velocity fields are discussed, and the effect of the inclination angle of the reactor relative to the horizontal is investigated.
Keywords: solar water disinfection, hydrodynamic modeling, solar irradiation modeling, CFD Fluent
Procedia PDF Downloads 348
5951 Study of Aging Behavior of Parallel-Series Connection Batteries
Authors: David Chao, John Lai, Alvin Wu, Carl Wang
Abstract:
For lithium-ion batteries with multiple cell configurations, some use scenarios can cause uneven aging across the cells within the battery because of uneven current distribution. The focus of this study is therefore to explore the aging effects on batteries with different construction designs. To systematically study the influence of various factors in key battery configurations, a detailed analysis of three battery construction factors is conducted: (1) terminal position; (2) cell alignment matrix; and (3) interconnect resistance between cells. In this study, the 2S2P circuit is used as a model multi-cell battery to set up different battery samples, and the aging behavior is studied by cycling tests to analyze the current distribution and recoverable capacity. According to the outcome of the aging tests, the key findings are: (I) different cell alignment matrices can have an impact on the cycle life of the battery; (II) a symmetrical structure has been identified as a critical factor influencing battery cycle life, and unbalanced resistance can lead to inconsistent cell aging; (III) the terminal position has been found to contribute to uneven current distribution, which can accelerate battery aging; and (IV) an increase in the internal connection resistance can actually increase cycle life; however, this increase in cycle life is accompanied by a decline in battery performance. In summary, these findings help to identify the key aging factors of multi-cell batteries and can be useful for effectively improving the accuracy of battery capacity predictions.
Keywords: multiple cells battery, current distribution, battery aging, cell connection
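A hedged sketch of why terminal position and interconnect resistance skew current sharing: two parallel branches at a common terminal voltage split the pack current in inverse proportion to their total resistance (Ohm's law), so the branch with extra interconnect resistance carries less current and ages differently. All resistance values below are invented for illustration.

```python
# Illustrative current split between parallel branches of a 2S2P pack.
# Resistance values are assumptions, not measured cell data.

def branch_currents(total_current, branch_resistances):
    """Current in each parallel branch at a shared terminal voltage:
    I_k = I_total * G_k / sum(G), with branch conductance G_k = 1 / R_k."""
    conductances = [1.0 / r for r in branch_resistances]
    g_sum = sum(conductances)
    return [total_current * g / g_sum for g in conductances]

# The far branch picks up 5 mOhm of extra interconnect resistance because
# it sits farther from the terminal.
r_cell = 0.050          # 50 mOhm per branch (assumed)
r_interconnect = 0.005  # 5 mOhm extra on the far branch (assumed)
i_near, i_far = branch_currents(10.0, [r_cell, r_cell + r_interconnect])
print(round(i_near, 3), round(i_far, 3))  # near branch carries more current
```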
Procedia PDF Downloads 77
5950 Observations on the Eastern Red Sea Elasmobranchs: Data on Their Distribution and Ecology
Authors: Frappi Sofia, Nicolas Pilcher, Sander DenHaring, Royale Hardenstine, Luis Silva, Collin Williams, Mattie Rodrigue, Vincent Pieriborne, Mohammed Qurban, Carlos M. Duarte
Abstract:
Elasmobranch populations are currently disappearing at a dangerous rate, mainly due to overexploitation, extensive fisheries, and climate change. The decline of these species can trigger a cascade effect, which may eventually lead to detrimental impacts on local ecosystems. Elasmobranchs in the Red Sea face one of the highest risks of extinction, mainly due to unregulated fishing activities. It is thus of paramount importance to assess their current distribution and unveil their environmental preferences in order to improve conservation measures. To achieve this goal, extensive data were collected throughout the Red Sea during the Red Sea Decade Expedition (RSDE). Elasmobranch sightings were gathered through the use of submarines, remotely operated underwater vehicles (ROVs), scuba diving operations, and helicopter surveys. Over a period of 5 months, we collected 891 sightings: 52 from submarines, 138 from the ROV, 67 from the scuba diving teams, and 634 from helicopters. In total, we observed 657 and 234 individuals from the superorders Batoidea and Selachimorpha, respectively. The most commonly encountered shark was Iago omanensis, a deep-water shark of the order Carcharhiniformes. Data on temperature, salinity, density, and dissolved oxygen were integrated with each sighting to reveal the favorable conditions for each species. Additionally, an extensive literature review on elasmobranch research in the Eastern Red Sea was carried out to obtain more data on local populations and to highlight patterns in their distribution.
Keywords: distribution, elasmobranchs, habitat, rays, red sea, sharks
Procedia PDF Downloads 84
5949 Residents' Incomes in Local Government Unit as the Major Determinant of Local Budget Transparency in Croatia: Panel Data Analysis
Authors: Katarina Ott, Velibor Mačkić, Mihaela Bronić, Branko Stanić
Abstract:
The determinants of national budget transparency have been widely discussed in the literature, while research on the determinants of local budget transparency is scarce and empirically inconclusive, particularly in the new, fiscally centralised EU member states. To fill the gap, we combine two strands of the literature: that concerned with public administration and public finance, shedding light on the economic and financial determinants of local budget transparency, and that on the political economy of transparency (principal-agent theory), covering the relationships among politicians and between politicians and voters. Our main hypothesis states that variables describing residents' capacity have a greater impact on local budget transparency than variables indicating the institutional capacity of local government units (LGUs). Additional sub-hypotheses test the impact of each analysed variable on local budget transparency. We address the determinants of local budget transparency in Croatia, measured by the number of key local budget documents published on the LGUs' websites. Using a data set of 128 cities and 428 municipalities over the 2015-2017 period and applying panel data analysis based on Poisson and negative binomial distributions, we test our main hypothesis and sub-hypotheses empirically. We measure different characteristics of institutional and residents' capacity for each LGU. The age, education, and ideology of the mayor or municipality head, political competition indicators, the number of employees, current budget revenues, and direct debt per capita have been used as measures of the institutional capacity of an LGU. Residents' capacity in each LGU has been measured through the number of citizens and their average age, as well as by average income per capita. The most important determinant of local budget transparency is average residents' income per capita, at both the city and the municipality level.
The results are in line with most previous research in fiscally decentralised countries. In the context of a fiscally centralised country with numerous small LGUs, most of which have low administrative and fiscal capacity, this has a theoretical rationale in legitimacy and principal-agent theory (opportunistic motives of the incumbent). The result is robust and significant, but because various other results change between the city and municipality levels (e.g., ideology and political competition), there is a need for further research, both on identifying other determinants and on methods of analysis. Since in Croatia the fiscal capacity of an LGU depends heavily on the income of its residents, units with higher per capita incomes in many cases also have higher budget revenues, allowing them to engage more employees and resources. In addition, residents' incomes might also be positively associated with local budget transparency because of higher citizen demand for such transparency: residents with higher incomes expect more public services and have more access to and experience in using the Internet, and will thus typically demand more budget information on the LGUs' websites.
Keywords: budget transparency, count data, Croatia, local government, political economy
Procedia PDF Downloads 183
5948 An Evaluation Model for Automatic Map Generalization
Authors: Quynhan Tran, Hong Fan, Quockhanh Pham
Abstract:
Automatic map generalization is a well-known problem in cartography, and the development of map generalization research has accompanied the development of cartography itself. Traditionally, maps are plotted manually by cartographic experts. This paper studies the scale-free automatic generalization of residential polygons and house marker symbols and proposes a methodology to evaluate the resulting maps based on the minimum spanning tree. The minimum spanning tree before and after map generalization is compared to evaluate whether the generalization result maintains the geographical distribution of features. The minimum spanning tree in vector format is first converted into raster format with a grid size of 2 mm (distance on the map). The number of matching grid cells before and after map generalization and the ratio of overlapping cells to the total number of cells are calculated. Evaluation experiments are conducted to verify the results. The experiments show that this methodology can give an objective evaluation of the feature distribution and assist specialists when they visually evaluate the resulting maps of scale-free automatic generalization.
Keywords: automatic cartography generalization, evaluation model, geographic feature distribution, minimal spanning tree
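A simplified sketch of the evaluation idea: build the minimum spanning tree (MST) of feature points before and after generalization and compare them. The paper rasterizes both MSTs on a 2 mm grid and counts matching cells; this sketch instead compares total MST length as a coarse proxy, with made-up coordinates.

```python
# Hedged sketch: MST comparison before/after generalization (Prim's algorithm).
# Coordinates are invented; the length ratio stands in for the paper's
# grid-matching statistic.
import math

def mst_length(points):
    """Total Euclidean length of the minimum spanning tree (Prim)."""
    if len(points) < 2:
        return 0.0
    best = {i: math.dist(points[0], points[i]) for i in range(1, len(points))}
    total = 0.0
    while best:
        nxt = min(best, key=best.get)    # cheapest point to attach to the tree
        total += best.pop(nxt)
        for i in best:                   # relax remaining attachment costs
            best[i] = min(best[i], math.dist(points[nxt], points[i]))
    return total

original = [(0, 0), (1, 0), (2, 0), (2, 1), (0.05, 0.05)]
# Generalization merges the two nearly coincident points into one symbol:
generalized = [(0, 0), (1, 0), (2, 0), (2, 1)]
similarity = mst_length(generalized) / mst_length(original)
print(round(similarity, 3))  # close to 1.0 -> distribution well preserved
```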
Procedia PDF Downloads 634