Search results for: pore size distribution
9610 Influence of the Compression Force and Powder Particle Size on Some Physical Properties of Date (Phoenix dactylifera) Tablets
Authors: Djemaa Megdoud, Messaoud Boudaa, Fatima Ouamrane, Salem Benamara
Abstract:
In recent years, the compression of date (Phoenix dactylifera L.) fruit powders (DP) to obtain date tablets (DT) has been suggested as a promising form of valorization of non-commercial but valuable date fruit (DF) varieties. To further improve and characterize DT, the present study investigates the influence of the DP particle size and the compression force on some physical properties of DT. The results show that, independently of particle size, the hardness (y) of the tablets increases with the compression force (x) following a logarithmic law (y = a ln(bx), where a and b are the model constants). Further, a full factorial design (FFD) at two levels, applied to investigate the erosion %, reveals that the effects of time and particle size are equal in absolute value and both exceed the effect of the compression force. Regarding the disintegration time, the results, also obtained by means of a FFD, show that the effect of the compression force is more than four times that of the DP particle size. Finally, the color parameters of DT in the CIELab system, measured immediately after tableting, are influenced differently by the size of the initial powder.
Keywords: powder, tablets, date (Phoenix dactylifera L.), hardness, erosion, disintegration time, color
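The logarithmic hardness law quoted above, y = a ln(bx), can be fitted by linearizing it: since y = a ln(x) + a ln(b), an ordinary least-squares line in ln(x) recovers both constants. A minimal sketch on synthetic data; the constants, noise level, and force values are assumed for illustration, not the study's:

```python
import numpy as np

# Fit y = a*ln(b*x) by linearization: y = a*ln(x) + a*ln(b), so a is the
# slope and b = exp(intercept / a). Synthetic data with assumed constants.

rng = np.random.default_rng(0)
force = np.linspace(5.0, 50.0, 20)               # compression force (assumed units)
a_true, b_true = 12.0, 0.8                       # assumed model constants
hardness = a_true * np.log(b_true * force) + rng.normal(0, 0.1, force.size)

slope, intercept = np.polyfit(np.log(force), hardness, 1)
a_fit = slope
b_fit = np.exp(intercept / a_fit)
print(a_fit, b_fit)
```

Because ln(bx) separates into ln(x) + ln(b), the two constants are identifiable from a single straight-line fit, which avoids iterative curve fitting altogether.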
Procedia PDF Downloads 418
9609 Effect of Fault Depth on Near-Fault Peak Ground Velocity
Authors: Yanyan Yu, Haiping Ding, Pengjun Chen, Yiou Sun
Abstract:
Fault depth is an important parameter to be determined in ground motion simulation, and peak ground velocity (PGV) shows good application prospects. Using a numerical simulation method, the variations in the distribution and peak value of near-fault PGV with fault depth were studied in detail, and the reasons for some of the observed phenomena are discussed. The simulation results show that the distribution characteristics of the fault-parallel (FP) and fault-normal (FN) components of PGV are distinctly different; the PGV of the FN component is much larger than that of the FP component. With increasing fault depth, the region of strong FN-component PGV moves forward along the rupture direction, while the strong-PGV zone of the FP component gradually moves away from the fault trace in the direction perpendicular to the strike. However, for both components, the strong-PGV area and its peak value decrease quickly with increasing fault depth. These results suggest that fault depth has a significant effect on both the FN and FP components of near-fault PGV.
Keywords: fault depth, near-fault, PGV, numerical simulation
Procedia PDF Downloads 338
9608 Prediction of the Torsional Vibration Characteristics of a Rotor-Shaft System Using Its Scale Model and Scaling Laws
Authors: Jia-Jang Wu
Abstract:
This paper presents the scaling laws that provide the criteria of geometric and dynamic similitude between a full-size rotor-shaft system and its scale model, and that can be used to predict the torsional vibration characteristics of the full-size system by manipulating the corresponding data of its scale model. The scaling factors, which play fundamental roles in relating the geometry and dynamics of the full-size rotor-shaft system to those of its scale model, are first obtained for torsional free vibration problems from the equation of motion of torsional free vibration. Then, the scaling factor of the external force (i.e., torque) required for torsional forced vibration problems is determined based on Newton's second law. Numerical results show that the torsional free and forced vibration characteristics of a full-size rotor-shaft system can be accurately predicted from those of its scale models by using the foregoing scaling factors. For this reason, it is believed that the presented approach will be useful for investigating the relevant phenomena in scale model tests.
Keywords: torsional vibration, full-size model, scale model, scaling laws
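The flavor of such a scaling law can be illustrated with a textbook case that is not taken from the paper: for a uniform free-free shaft, the torsional natural frequencies are ω_i = (iπ/L)√(G/ρ), so scaling every length by a factor λ with the same material scales every frequency by 1/λ. The material values below are assumptions:

```python
import math

# Illustrative geometric scaling check for torsional free vibration
# (a standard uniform-shaft result, not the paper's derivation). For a
# uniform free-free shaft, w_i = (i*pi/L)*sqrt(G/rho), so scaling all
# lengths by lam (same material) scales every natural frequency by 1/lam.

def torsional_frequency(mode, length, shear_modulus, density):
    """Natural frequency (rad/s) of torsional mode i for a uniform free-free shaft."""
    return (mode * math.pi / length) * math.sqrt(shear_modulus / density)

G, rho = 80e9, 7850.0          # assumed steel properties (Pa, kg/m^3)
L_full, lam = 2.0, 0.1         # full-size length (m) and scale factor

for i in (1, 2, 3):
    w_full = torsional_frequency(i, L_full, G, rho)
    w_model = torsional_frequency(i, lam * L_full, G, rho)
    print(i, w_model / w_full)
```

Every mode shows the same ratio 1/λ, which is exactly the kind of mode-independent scaling factor that lets scale-model measurements be mapped back to the full-size system.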
Procedia PDF Downloads 386
9607 Distribution Patterns of Trace Metals in Soils of Gbongan-Odeyinka-Orileowu Area, Southwestern Nigeria
Authors: T. A. Adesiyan, J. A. Adekoya, A. Akinlua, N. Torto
Abstract:
One hundred and eighty-six in situ soil samples of the B-horizon were collected around the Gbongan-Odeyinka-Orileowu area, southwestern Nigeria, delineated by longitudes 4°15′ and 4°30′ and latitudes 7°14′ and 7°31′, for a reconnaissance geochemical soil survey. The objective was to determine the distribution patterns of some trace metals in the area with a view to discovering any indication of metallic mineralization. The samples were air-dried and sieved to obtain the minus-230 µm fractions, which were used for pH determinations and subjected to hot aqua regia acid digestion. The solutions obtained were analyzed for Ag, As, Au, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sn, and Zn using atomic absorption spectrometric methods. The resulting data were subjected to simple statistical treatment and used in preparing distribution maps of the elements, with which the spatial distributions of the elements in the area were discussed. The pH of the soils ranges from 4.70 to 7.59, and this is reflected in the geochemical distribution patterns of the trace metals in the area. The spatial distribution maps showed similarity in the distributions of Co, Cr, Fe, Ni, Mn, and Pb, suggesting close associations between these elements, none of which showed any significant anomaly in the study. The associations might be due to the scavenging action of Fe-Mn oxides on the elements. Only Ag, Au, and Sn on one hand, and Zn on the other, showed significant anomalies, which are thought to be due to mineralization and anthropogenic activities, respectively.
Keywords: distribution, metals, Gbongan, Nigeria, mineralization, anthropogenic
Procedia PDF Downloads 313
9606 Effects of Compensation on Distribution System Technical Losses
Authors: B. Kekezoglu, C. Kocatepe, O. Arikan, Y. Hacialiefendioglu, G. Ucar
Abstract:
One of the significant problems of energy systems is supplying economic and efficient energy to consumers, and studies have therefore continued on reducing technical losses in the network. In this paper, the technical losses of a portion of the European-side Istanbul MV distribution network are analyzed for different compensation scenarios, considering real system and load data, and the results are presented. The investigated system is modeled with CYME Power Engineering software, and optimal capacitor placement is proposed to minimize losses.
Keywords: distribution system, optimal capacitor placement, reactive power compensation, technical losses
Procedia PDF Downloads 660
9605 2D Numerical Modeling for Induced Current Distribution in Soil under Lightning Impulse Discharge
Authors: Fawwaz Eniola Fajingbesi, Nur Shahida Midia, Elsheikh M. A. Elsheikh, Siti Hajar Yusoff
Abstract:
Empirical analysis of lightning-related phenomena in real time is extremely dangerous because of the relatively high electric discharge involved; hence, the design and optimization of efficient grounding systems based on real-time empirical methods are impeded. Using numerical methods, the dynamics of complex systems can be modeled and solved as sets of linear and non-linear systems. In this work, the induced current distribution as a lightning strike traverses the soil has been numerically modeled in 2D axial symmetry and solved using the finite element method (FEM) in the COMSOL Multiphysics 5.2 AC/DC module. Stratified and non-stratified electrode systems were considered in the solved model, and the soil conductivity (σ) was varied between 10 and 58 mS/m. The results discussed are the electric field distribution, the current distribution, and the soil ionization phenomena. It can be concluded that the electric field and current distributions are influenced by the injected electric potential and the non-linearity of the soil conductivity. The results of the numerical calculation also agree with previous laboratory-scale empirical results.
Keywords: current distribution, grounding systems, lightning discharge, numerical model, soil conductivity, soil ionization
Procedia PDF Downloads 309
9604 Numerical Study of Partial Penetration of PVDs in Soft Clay Soils Treatment Along With Surcharge Preloading (Bangkok Airport Case Study)
Authors: Mohammad Mehdi Pardsouie, Mehdi Mokhberi, Seyed Mohammad Ali Zomorodian, Seyed Alireza Nasehi
Abstract:
One of the challenging parts of every project including prefabricated vertical drains (PVDs) is the determination of the depth of installation and its configuration. In this paper, GeoStudio 2018 was used for modeling and verification of the full-scale test embankments (TS1, TS2, and TS3) that were constructed to study the effectiveness of PVDs in accelerating the consolidation and the dissipation of the excess pore pressures resulting from fill placement at Bangkok airport. Different depths and scenarios were modeled, and the results were compared and analyzed. Since the ultimate goal is attaining a pre-determined settlement, the settlement curve under the soil embankment was used for the investigation of the results. It was shown that in nearly all cases, the same results and efficiency may be obtained by partial-depth installation of PVDs instead of full constant-length installation. However, because of the distinct characteristics of clay soils and the layer properties of any given project, further investigation of full-scale test embankments and modeling is needed before the ultimate design is finalized by competent geotechnical consultants.
Keywords: partial penetration, surcharge preloading, excess pore water pressure, Bangkok test embankments
Procedia PDF Downloads 192
9603 VaR or TCE: Explaining the Preferences of Regulators
Authors: Silvia Faroni, Olivier Le Courtois, Krzysztof Ostaszewski
Abstract:
While a lot of research concentrates on the merits of VaR and TCE, the two most classic risk indicators used by financial institutions, little has been written on why regulators favor VaR or TCE in their sets of rules. In this paper, we investigate the preferences of regulators with the aim of understanding why, for instance, a VaR with a given confidence level is ultimately retained. Further, this paper provides equivalence rules that explain how a given choice of VaR can be equivalent to a given choice of TCE. We then introduce a new risk indicator that extends TCE by providing a more versatile weighting of the constituents of probability distribution tails. All of our results are illustrated using the generalized Pareto distribution.
Keywords: generalized Pareto distribution, generalized tail conditional expectation, regulator preferences, risk measure
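The two indicators compared above are easy to pin down for the generalized Pareto distribution (GPD) the abstract uses as its illustration: VaR at level α is the α-quantile, and TCE has the closed form TCE_α = VaR_α + (σ + ξ·VaR_α)/(1 − ξ) for shape ξ < 1. A sketch with assumed parameter values, checked against Monte Carlo:

```python
import numpy as np
from scipy.stats import genpareto

# Sketch (not from the paper): VaR and TCE of a generalized Pareto
# distribution GPD(xi, sigma) with location 0, using the closed-form
# TCE_a = VaR_a + (sigma + xi*VaR_a) / (1 - xi), valid for xi < 1.

xi, sigma, alpha = 0.2, 1.0, 0.95        # assumed shape, scale, confidence level

var_a = genpareto.ppf(alpha, xi, scale=sigma)          # VaR: the alpha-quantile
tce_a = var_a + (sigma + xi * var_a) / (1.0 - xi)      # closed-form TCE

# Monte Carlo check of the closed form: average the losses beyond VaR
rng = np.random.default_rng(1)
sample = genpareto.rvs(xi, scale=sigma, size=1_000_000, random_state=rng)
tce_mc = sample[sample > var_a].mean()
print(var_a, tce_a, tce_mc)
```

TCE is always at least VaR at the same level, which is one way to see why the "equivalence rules" mentioned above must trade a TCE at one confidence level against a VaR at a higher one.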
Procedia PDF Downloads 163
9602 The Generalized Pareto Distribution as a Model for Sequential Order Statistics
Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian
Abstract:
In this article, sequential order statistics (SOS) samples under type II censoring coming from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Owing to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics
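For orientation, maximum likelihood estimation for the generalized Pareto distribution on ordinary (uncensored, i.i.d.) data is a one-liner in scipy; the paper's SOS / type-II-censored likelihood is considerably more involved. The parameter values below are assumptions for a synthetic check:

```python
import numpy as np
from scipy.stats import genpareto

# Minimal sketch of ML estimation for the generalized Pareto distribution
# on ordinary i.i.d. data (not the paper's SOS, type-II-censored setting).
# True parameter values are assumed for this synthetic experiment.

xi_true, sigma_true = 0.3, 2.0
rng = np.random.default_rng(42)
data = genpareto.rvs(xi_true, scale=sigma_true, size=5000, random_state=rng)

# floc=0 pins the location parameter, leaving shape xi and scale sigma free
xi_hat, _, sigma_hat = genpareto.fit(data, floc=0)
print(xi_hat, sigma_hat)
```

With censored or sequential data the likelihood no longer factorizes this way, which is why the paper needs its re-parametrization and explicit existence/uniqueness conditions.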
Procedia PDF Downloads 499
9601 Proposed Fault Detection Scheme on Low Voltage Distribution Feeders
Authors: Adewusi Adeoluwawale, Oronti Iyabosola Busola, Akinola Iretiayo, Komolafe Olusola Aderibigbe
Abstract:
The complex and radial structure of the low voltage distribution network (415 V) makes it vulnerable to faults caused by system and environmental factors. Besides this, the protective device employed on the low voltage network, the fuse, cannot be monitored remotely, so that in the event of a sustained fault the utility has to rely solely on customer complaints of loss of supply, which tends to increase the length of outages. A microcontroller-based fault detection scheme is hereby developed to detect low voltage and high voltage fault conditions, which are common faults on this network. Voltages below 198 V and above 242 V on the distribution feeders are classified and detected as low voltage and high voltage conditions, respectively. Results show that the developed scheme produced a good response time in the detection of these faults.
Keywords: fault detection, low voltage distribution feeders, outage times, sustained faults
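The detection rule above reduces to a simple threshold classification. A toy sketch follows; note that reading the 198 V / 242 V limits as ±10% of a nominal 220 V phase voltage is our interpretation, not stated in the abstract:

```python
# Toy sketch of the voltage-threshold classification described above.
# The 198 V / 242 V limits come from the abstract; interpreting them as
# +/-10% of a nominal 220 V phase voltage is an assumption on our part.

LOW_LIMIT, HIGH_LIMIT = 198.0, 242.0   # volts

def classify_feeder_voltage(v):
    """Label a phase-voltage reading as a low-voltage fault, normal,
    or a high-voltage fault, mirroring the scheme's detection rule."""
    if v < LOW_LIMIT:
        return "low-voltage fault"
    if v > HIGH_LIMIT:
        return "high-voltage fault"
    return "normal"

for reading in (185.0, 220.0, 250.0):
    print(reading, classify_feeder_voltage(reading))
```

On a microcontroller, the same comparison would run against the sampled RMS voltage, with the fault label reported to the utility instead of waiting for customer complaints.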
Procedia PDF Downloads 532
9600 Sales Patterns Clustering Analysis on Seasonal Product Sales Data
Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho
Abstract:
As a seasonal product is in demand only for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on clustering of observed sales data; the proposed method helps in determining distribution strategies.
Keywords: clustering, distribution, sales pattern, seasonal product
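The abstract does not name a specific clustering algorithm, so the sketch below uses a plain k-means on normalized monthly sales curves for ten hypothetical outlets (five summer-peaking, five winter-peaking); all data are synthetic:

```python
import numpy as np

# Illustrative sketch only: cluster outlets by the *shape* of their
# monthly sales curve. The algorithm (k-means) and the data are our
# assumptions; the abstract names neither.

rng = np.random.default_rng(7)
summer = np.array([1, 1, 2, 4, 7, 9, 9, 8, 5, 2, 1, 1], dtype=float)
winter = np.array([9, 8, 5, 2, 1, 1, 1, 2, 4, 7, 9, 9], dtype=float)
outlets = np.vstack([p + rng.normal(0, 0.3, 12)
                     for p in [summer] * 5 + [winter] * 5])
outlets /= outlets.sum(axis=1, keepdims=True)   # compare shapes, not volumes

def kmeans(x, init, iters=20):
    """Plain Lloyd's algorithm with fixed initial centers."""
    centers = x[init].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.vstack([x[labels == j].mean(0) for j in range(len(init))])
    return labels

labels = kmeans(outlets, init=[0, 5])
print(labels)
```

Normalizing each curve to sum to one makes the clustering respond to seasonal shape rather than outlet size, which is the natural grouping for choosing per-outlet distribution strategies.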
Procedia PDF Downloads 588
9599 Network Coding with Buffer Scheme in Multicast for Broadband Wireless Network
Authors: Gunasekaran Raja, Ramkumar Jayaraman, Rajakumar Arul, Kottilingam Kottursamy
Abstract:
Broadband Wireless Network (BWN) is a promising technology nowadays due to the increasing number of smartphones. A buffering scheme using network coding considers the reliability and proper degree distribution in a Worldwide Interoperability for Microwave Access (WiMAX) multi-hop network. Using network coding, a secure mode of transmission is achieved, which helps in improving throughput and reduces packet loss in the multicast network. At the outset, improved network coding is proposed for the multicast wireless mesh network. Considering the problem of performance overhead, the degree distribution guides the decisions made while buffering in the encoding/decoding process. Consequently, BuS (Buffer Scheme) based on network coding is proposed for the multi-hop network. Here the encoding process introduces a buffer for temporary storage so that packets are transmitted with the proper degree distribution. The simulation results depend on the number of packets received in the encoding/decoding with the proper degree distribution using the buffering scheme.
Keywords: encoding and decoding, buffer, network coding, degree distribution, broadband wireless networks, multicast
Procedia PDF Downloads 392
9598 Velocity Distribution in Density Currents Flowing over Rough Beds
Authors: Reza Nasrollahpour, Mohamad Hidayat Bin Jamal, Zulhilmi Bin Ismail
Abstract:
Density currents are generated when fluid of one density is released into another fluid with a different density. These currents occur in a variety of natural and man-made environments, which emphasises the importance of studying them. In most practical cases, density currents flow over surfaces that are not plane; however, there have been limited investigations in this regard. This study uses laboratory experiments to analyse the influence of bottom roughness on the velocity distribution within these dense underflows. The currents are analysed over a plane surface and three different configurations of beam-roughened beds. The velocity profiles are collected using the Acoustic Doppler Velocimetry technique, and the distribution of velocity within these currents is formulated for the tested beds. The results indicate that empirical power and Gaussian relations can describe the velocity distribution in the inner and outer regions of the profiles, respectively. Moreover, it is found that the bottom roughness is the primary controlling parameter in the inner region.
Keywords: density currents, velocity profiles, Acoustic Doppler Velocimeter, bed roughness
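The two-region description above (a power law below the velocity maximum, a Gaussian decay above it) can be sketched as a composite profile. The functional forms are standard in the density-current literature, but every constant below is an assumed placeholder, not one of the study's fitted coefficients:

```python
import numpy as np

# Hedged sketch of a two-region velocity profile: power law in the inner
# (wall) region, Gaussian in the outer (jet) region. The exponent, decay
# constant, and scales are assumed values, not the study's results.

u_max, z_max, h = 0.12, 0.02, 0.08    # peak velocity (m/s), its height (m), outer scale (m)
n, alpha = 6.0, 1.4                    # assumed empirical constants

def density_current_velocity(z):
    z = np.asarray(z, dtype=float)
    inner = u_max * (z / z_max) ** (1.0 / n)                  # wall (inner) region
    outer = u_max * np.exp(-alpha * ((z - z_max) / h) ** 2)   # jet (outer) region
    return np.where(z <= z_max, inner, outer)

heights = np.array([0.005, 0.02, 0.06])
print(density_current_velocity(heights))
```

The study's finding that roughness mainly controls the inner region corresponds, in this parameterization, to the bed altering n and z_max while leaving the Gaussian outer region comparatively unchanged.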
Procedia PDF Downloads 176
9597 The Modality of Multivariate Skew Normal Mixture
Authors: Bader Alruwaili, Surajit Ray
Abstract:
Finite mixtures are a flexible and powerful tool that can be used for univariate and multivariate distributions, and a wide range of research analyses have been conducted based on the multivariate normal mixture and the multivariate t-mixture. Determining the number of modes is an important activity that, in turn, allows one to determine the number of homogeneous groups in a population. Our current work relates to the study of the modality of the skew normal distribution in the univariate and multivariate cases. For the skew normal distribution, the aims are to study its modality and to provide the ridgeline, the ridgeline elevation function, the Π function, and the curvature function, which will be conducive to an exploration of the number and location of modes when mixing two skew normal components. The subsequent objective is to apply these results to real-world data sets, such as flow cytometry data.
Keywords: mode, modality, multivariate skew normal, finite mixture, number of modes
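In the univariate case, the mode count of a two-component skew normal mixture can at least be checked numerically, by locating local maxima of the density on a fine grid. This brute-force sketch is ours, not the paper's analytic ridgeline approach, and all mixture parameters are assumed:

```python
import numpy as np
from scipy.stats import skewnorm

# Numerical sketch (not the paper's analytic approach): count the modes
# of a two-component univariate skew normal mixture by locating local
# maxima of its density on a fine grid. All parameters are assumed.

def mixture_pdf(x, w=0.5, a1=4.0, loc1=0.0, a2=-4.0, loc2=4.0):
    return (w * skewnorm.pdf(x, a1, loc=loc1)
            + (1 - w) * skewnorm.pdf(x, a2, loc=loc2))

x = np.linspace(-5.0, 9.0, 20001)
f = mixture_pdf(x)
# interior grid points strictly higher than both neighbours are modes
modes = x[1:-1][(f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])]
print(len(modes), modes)
```

For well-separated components this grid search finds both modes; the value of the analytic ridgeline machinery the abstract describes is that it answers the same question without a grid, and extends to the multivariate case where grids are infeasible.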
Procedia PDF Downloads 483
9596 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data
Authors: Rana Rimawi, Ayman Baklizi
Abstract:
Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and have gained widespread use in applications because of their flexibility in data analysis. More specifically, the generalized logistic distribution, with its different types, has received considerable attention recently. In this study, based on progressively type-II censored data, we consider point estimation for the type II generalized logistic distribution (Type II GLD). We develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators, and best linear unbiased estimators (BLUE). The estimators are compared by simulation using the criteria of bias and mean square error (MSE). An illustrative example with a real data set is given.
Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation
Procedia PDF Downloads 192
9595 An Investigation on Electric Field Distribution around 380 kV Transmission Line for Various Pylon Models
Authors: C. F. Kumru, C. Kocatepe, O. Arikan
Abstract:
In this study, electric field distribution analyses for three pylon models are carried out with software based on the Finite Element Method (FEM). Analyses are performed in both the stationary and time domains to observe instantaneous values along with the effective ones. The results of the study show that different line geometries considerably affect the magnitude and distribution of the electric field even though the line voltages are the same. Furthermore, it is observed that the maximum instantaneous electric field values obtained in the time domain analysis are considerably higher than the effective values from the stationary mode. Consequently, electric field distribution analyses should be made individually for each line model, and the exposure limit values or distances to residential buildings should be defined according to the results obtained.
Keywords: electric field, energy transmission line, finite element method, pylon
Procedia PDF Downloads 718
9594 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions
Authors: Chaitanya Varma, Arpan Mehar
Abstract:
The present study demonstrates a procedure for analysing speed data collected on various four-lane divided sections in India. Field data for the study were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to the speed distributions were estimated. Different statistical distributions were fitted to the vehicle-type speed data and to the mixed traffic speed data. It was found that vehicle-type speed data follow either the normal or the log-normal distribution, whereas the mixed traffic speed data follow more than one type of statistical distribution; the most common fits observed on mixed traffic speed data were the beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters are proposed in the present study, including models with traffic parameters and with curve geometry parameters. Two different operating speed models, with the variables 1/R and ln(R), were proposed and found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
Keywords: highway, mixed traffic flow, modeling, operating speed
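One of the model forms named above, operating speed as a linear function of ln(R), can be estimated with an ordinary least-squares fit after transforming the radius. The coefficients, radii, and noise below are synthetic placeholders, not the study's estimates:

```python
import numpy as np

# Hypothetical sketch of one model form from the abstract,
# V85 = a + b*ln(R): operating speed rising with curve radius R.
# All coefficients and data are synthetic, not the study's.

rng = np.random.default_rng(3)
radius = np.array([100, 150, 200, 300, 400, 600, 800, 1000], dtype=float)  # m
a_true, b_true = 35.0, 8.0
v85 = a_true + b_true * np.log(radius) + rng.normal(0, 1.0, radius.size)   # km/h

b_hat, a_hat = np.polyfit(np.log(radius), v85, 1)   # slope, intercept
print(a_hat, b_hat)
```

The ln(R) form flattens out for large radii, matching the intuition that beyond some radius a curve no longer constrains speed; the alternative 1/R form encodes the same saturation differently, which is why each variant suits a different range of radii.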
Procedia PDF Downloads 456
9593 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index
Authors: Todd Zhou, Mikhail Yurochkin
Abstract:
Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This score is created by adding labeled source information into the judging space of the uncertainty entropy score using a harmonic mean. Furthermore, prediction bias is explored through an equality-of-opportunity violation measurement, and model performance is further improved through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index
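The two ingredients named above, an entropy score over predicted probabilities and a harmonic-mean combination with a labeled-source score, can be sketched as follows. The abstract does not give the exact CombinedScore formula, so the mapping from entropy to a (0, 1] score here is purely our assumption:

```python
import numpy as np

# Hedged sketch of the two ingredients the abstract names: prediction
# entropy and a harmonic-mean combination with a source-domain score.
# The exact "CombinedScore" definition is not given; this is our reading.

def prediction_entropy(probs):
    """Mean Shannon entropy of a batch of predicted class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=1)))

def combined_score(source_accuracy, ood_entropy):
    """Harmonic mean of a source-domain score and an inverted-entropy
    score, so a model must do well on both to score highly."""
    certainty = 1.0 / (1.0 + ood_entropy)      # assumed mapping to (0, 1]
    return 2 * source_accuracy * certainty / (source_accuracy + certainty)

confident = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
uncertain = np.array([[0.4, 0.3, 0.3], [0.34, 0.33, 0.33]])
print(prediction_entropy(confident), prediction_entropy(uncertain))
print(combined_score(0.85, prediction_entropy(confident)))
```

The harmonic mean is the deliberate choice here: unlike an arithmetic mean, it punishes a model that is strong on the source domain but wildly uncertain on the OOD data, or vice versa.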
Procedia PDF Downloads 117
9592 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size
Authors: Jude Opara, Esemokumo Perewarebo Akpos
Abstract:
This paper compares non-parametric linear regression with its parametric counterparts for a large sample size. A data set on anthropometric measurements of primary school pupils was taken for the analysis, using 50 randomly selected pupils. The data set was subjected to a normality test using the Anderson-Darling technique, and it was discovered that the residuals are not normally distributed (i.e., they do not follow a Gaussian distribution) under the commonly used least squares method for fitting an equation to a set of (x, y) data points. The algorithms for the non-parametric Theil's regression are stated in this paper, as well as those of its parametric OLS counterpart. The R programming language was used for the analysis. The results showed that there exists a significant relationship between the response and the explanatory variable for both the parametric and non-parametric regressions. To compare the efficiency of the two methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) were used, and it was discovered that the non-parametric regression performs better than its parametric counterpart owing to its lower values of both AIC and BIC. The study, however, recommends that future researchers examine the presence of outliers in the data set, expunge them if detected, and re-analyse to compare results.
Keywords: Theil's regression, Bayesian information criterion, Akaike information criterion, OLS
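The core of Theil's regression named above is simple: the slope estimate is the median of all pairwise slopes, which makes it robust to the outliers the authors warn about. A Python sketch on synthetic data (the paper itself used R):

```python
import numpy as np
from itertools import combinations

# Sketch of the Theil (Theil-Sen) estimator: the slope is the median of
# all pairwise slopes, which resists outliers that pull the least-squares
# (OLS) fit away. Data are synthetic; true slope is 2.

rng = np.random.default_rng(5)
x = np.arange(1.0, 21.0)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, x.size)
y[-1] += 30.0                                   # one gross outlier

slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i, j in combinations(range(x.size), 2)]
theil_slope = float(np.median(slopes))
ols_slope = float(np.polyfit(x, y, 1)[0])
print(theil_slope, ols_slope)
```

A single corrupted point shifts only a minority of the pairwise slopes, so the median is barely moved, while OLS, which minimizes squared residuals, is dragged visibly toward the outlier.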
Procedia PDF Downloads 298
9591 A Study on the Determinants of Earnings Response Coefficient in an Emerging Market
Authors: Bita Mashayekhi, Zeynab Lotfi Aghel
Abstract:
The determinants of the Earnings Response Coefficient (ERC), including firm size, earnings growth, and earnings persistence, are studied in this research. These determinants are supposed to be moderator variables that affect the ERC and the Return Response Coefficient. The research sample contains 82 Iranian companies listed on the Tehran Stock Exchange (TSE) from 2001 to 2012. The gathered data were processed with the EVIEWS software. The results show a significant positive relation between firm size and ERC, and also between earnings growth and ERC, implying that ERC increases with firm size and earnings growth; however, there is no significant relation between earnings persistence and ERC.
Keywords: earnings response coefficient (ERC), return response coefficient (RRC), firm size, earnings growth, earnings persistence
Procedia PDF Downloads 319
9590 Dynamic Distribution Calibration for Improved Few-Shot Image Classification
Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran
Abstract:
Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled training data remain a challenge. Limited samples often lead to overfitting due to biased sample distributions. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base-class and new-class samples undergo normalization to mitigate disparate feature magnitudes, and a pre-trained model extracts feature vectors from both classes. The method then dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold-value approach for new-class samples. Given the propensity of similar classes to share feature distribution statistics such as mean and variance, this research assumes a Gaussian distribution for the feature vectors. Subsequently, the distributional features of new-class samples are calibrated using a corrected hyperparameter derived from the distribution features of both adjacent and distant base classes, and the calibrated distribution is used to augment the new-class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on the miniImagenet and CUB datasets.
Keywords: deep learning, computer vision, image classification, few-shot learning, threshold
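The calibration idea above can be sketched in its simplest form: borrow Gaussian statistics from the base classes nearest to a one-shot new class, then sample extra features from the calibrated distribution. The abstract does not give the paper's exact selection rule or correction hyperparameter, so everything below, including the blend weight, is an assumption:

```python
import numpy as np

# Minimal sketch of distribution calibration for few-shot learning, in
# the spirit of the method described above but NOT its exact rule: borrow
# Gaussian statistics from the nearest base classes to augment a one-shot
# new class. All data, k, and the variance offset alpha are assumed.

rng = np.random.default_rng(11)
dim, n_base = 16, 8
base_means = rng.normal(0, 1, (n_base, dim))
base_vars = np.full((n_base, dim), 0.25)        # per-dimension variances

new_sample = base_means[0] + rng.normal(0, 0.5, dim)   # one-shot new class

# pick the k base classes whose means lie nearest to the new sample
k, alpha = 2, 0.5
nearest = np.argsort(np.linalg.norm(base_means - new_sample, axis=1))[:k]

# calibrated Gaussian: blend the sample with borrowed means and variances
calib_mean = (new_sample + base_means[nearest].sum(0)) / (k + 1)
calib_var = base_vars[nearest].mean(0) + alpha

# augment the support set by sampling from the calibrated distribution
augmented = rng.normal(calib_mean, np.sqrt(calib_var), size=(100, dim))
print(augmented.shape)
```

Training a simple classifier on the augmented set rather than the single shot is what turns the borrowed statistics into the accuracy gains the abstract reports.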
Procedia PDF Downloads 54
9589 Microfiltration of the Sugar Refinery Wastewater Using Ceramic Membrane with Kenics Static Mixer
Authors: Zita Šereš, Ljubica Dokić, Nikola Maravić, Dragana Šoronja Simović, Cecilia Hodur, Ivana Nikolić, Biljana Pajin
Abstract:
New environmental regulations and the increasing market preference for companies that respect the ecosystem have encouraged industry to look for new treatments for its effluents. The sugar industry, one of the largest emitters of environmental pollutants, follows this tendency. Membrane technology is convenient for the separation of suspended solids, colloids and high molecular weight materials present in wastewater from the sugar industry. The idea is to microfilter the wastewater so that the permeate passing through the membrane becomes available for recycling and re-use in the sugar manufacturing process. For microfiltration of this effluent, a tubular ceramic membrane was used with a pore size of 200 nm, at transmembrane pressures in the range of 1-3 bar and flow rates in the range of 50-150 L/h. A Kenics static mixer was used for permeate flux enhancement. Turbidity and suspended solids were removed, and the permeate flux was continuously monitored during the microfiltration process. The flux achieved after 90 minutes of microfiltration was in the range of 50-70 L/m²h. The turbidity decrease obtained was in the range of 50-99%, and the total amount of suspended solids was removed.
Keywords: ceramic membrane, microfiltration, permeate flux, sugar industry, wastewater
Procedia PDF Downloads 516
9588 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa
Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions and the modes of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions, and to discriminate between different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportions in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean values show the dominance of fine sand-size particles, which points to relatively low energy conditions of deposition. In addition, the LDF results point to low energy conditions during the deposition of the Prince Albert, Collingham and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high energy conditions.
The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension and rolling/saltation, and by graded suspension; the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of the hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation was the major process of transportation, although suspension and traction also played some role during deposition; the sediments were mainly in saltation and suspension before being deposited.
Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa
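Statistics such as the graphic mean, sorting, skewness and kurtosis used in this study are conventionally the Folk and Ward (1957) graphic measures, computed from percentiles of the cumulative grain-size curve in phi units. A sketch with an invented cumulative curve (the percentile values are illustrative, not measured data from the Ecca Group):

```python
import numpy as np

# Sketch of the Folk and Ward (1957) graphic grain-size parameters.
# The phi percentiles below are invented for illustration and chosen to
# produce a fine-grained, moderately well sorted, near-symmetrical,
# mesokurtic sandstone like those the study describes.

def folk_ward(curve):
    """Graphic mean, sorting, skewness and kurtosis from a cumulative
    grain-size curve given as (cumulative %, phi value) rows."""
    p = {q: np.interp(q, curve[:, 0], curve[:, 1])
         for q in (5, 16, 25, 50, 75, 84, 95)}
    mean = (p[16] + p[50] + p[84]) / 3
    sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skew = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
            + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurt = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, sorting, skew, kurt

curve = np.array([[5, 2.1], [16, 2.5], [25, 2.6], [50, 3.0],
                  [75, 3.35], [84, 3.5], [95, 3.9]], dtype=float)
mean, sorting, skew, kurt = folk_ward(curve)
print(mean, sorting, skew, kurt)
```

For this synthetic curve the parameters land in the verbal classes the abstract reports: sorting between 0.5 and 0.71 phi is "moderately well sorted", |skewness| below 0.1 is "near-symmetrical", and kurtosis between 0.90 and 1.11 is "mesokurtic".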
Procedia PDF Downloads 473
9587 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the cost of processing units into the model. We describe this using the well-known Wright's Learning Curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as the constant per-unit costs of leftover and shortage, the zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost saving from worker learning is added to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy.
Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform distribution; if the demand follows a Beta distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
Keywords: inventory management, Newsvendor model, order policy, worker learning
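For reference, the classical no-learning baseline that this work modifies balances overage and underage costs at the critical fractile F(Q*) = cu/(cu + co). A minimal sketch for Uniform demand (illustrative parameter names; this is the textbook benchmark, not the paper's learning-adjusted policy):

```python
def newsvendor_uniform(price, cost, salvage, lo, hi):
    """Classical newsvendor order quantity for Uniform(lo, hi) demand.

    cu = price - cost is the underage cost (lost margin per unit short),
    co = cost - salvage is the overage cost (loss per leftover unit);
    the optimizer satisfies F(Q*) = cu / (cu + co).
    """
    cu = price - cost
    co = cost - salvage
    ratio = cu / (cu + co)          # critical fractile
    return lo + ratio * (hi - lo)   # Uniform quantile at the fractile
```

When a per-unit processing cost that decreases with cumulative output (Wright's curve) is added, the expected-cost function generally loses this convex structure, which is why the characterization results above are needed.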
Procedia PDF Downloads 406
9586 Force Distribution and Muscles Activation for Ankle Instability Patients with Rigid and Kinesiotape while Standing
Authors: Norazlin Mohamad, Saiful Adli Bukry, Zarina Zahari, Haidzir Manaf, Hanafi Sawalludin
Abstract:
Background: Deficits in neuromuscular recruitment and decreased force distribution are common problems among ankle instability patients, due to altered joint kinematics that lead to recurrent ankle injuries. Rigid Tape and KT Tape have been widely used as therapeutic and performance-enhancement tools in ankle stability. However, the difference in effect between these two tapes is still controversial. Objective: To investigate the difference in effect between Rigid Tape and KT Tape on force distribution and muscle activation among ankle instability patients while standing. Study design: Crossover trial. Participants: 27 patients, aged between 18 and 30 years, participated in this study. KT Tape and Rigid Tape were applied to each subject's affected ankle, with an interval of 3 days between interventions. The subjects were first tested barefoot (without tape) to provide a baseline, before proceeding with KT Tape and then with Rigid Tape. Result: There was no significant difference in force distribution at the forefoot and back-foot for either tape while standing. However, the mean data show that Rigid Tape produced the highest force distribution at the back-foot rather than the forefoot, whereas KT Tape produced more force distribution at the forefoot while standing. Regarding muscle activation (Peroneus Longus), results showed a significant difference between Rigid Tape and KT Tape (p = 0.048). However, there was no significant difference in Tibialis Anterior muscle activation between the two tapes while standing. Conclusion: The results indicated that the Peroneus Longus muscle was more active with Rigid Tape than with KT Tape in ankle instability patients while standing.
Keywords: ankle instability, kinematic, muscle activation, force distribution, Rigid Tape, KT tape
Procedia PDF Downloads 405
9585 Conservativeness of Probabilistic Constrained Optimal Control Method for Unknown Probability Distribution
Authors: Tomoaki Hashimoto
Abstract:
In recent decades, probabilistic constrained optimal control problems have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in an optimization problem, several tractable methods have been proposed to handle them. In most methods, probabilistic constraints are reduced to deterministic constraints that are tractable in an optimization problem. However, there is a gap between the transformed deterministic constraints in the cases of known and unknown probability distributions. This paper examines the conservativeness of the probabilistic constrained optimization method with an unknown probability distribution. The objective of this paper is to provide a quantitative assessment of the conservatism of tractable constraints in probabilistic constrained optimization with an unknown probability distribution.
Keywords: optimal control, stochastic systems, discrete time systems, probabilistic constraints
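The known/unknown gap can be made concrete with a standard scalar example (an illustration of the general idea, not necessarily the transformation used in the paper): for a constraint P(x + w ≤ c) ≥ 1 − ε with zero-mean noise w of known standard deviation σ, a known Gaussian distribution yields a quantile-based back-off, while knowing only the mean and variance forces the more conservative Cantelli (one-sided Chebyshev) back-off:

```python
import math
from statistics import NormalDist

def constraint_backoff(sigma, eps, distribution_known):
    """Back-off b so that P(x + w <= c) >= 1 - eps is enforced as
    the deterministic constraint x <= c - b.

    Known Gaussian w:  b = sigma * z_{1-eps}  (exact quantile).
    Unknown w (mean/variance only): Cantelli's inequality gives the
    distribution-free bound b = sigma * sqrt((1 - eps) / eps).
    """
    if distribution_known:
        return sigma * NormalDist().inv_cdf(1.0 - eps)
    return sigma * math.sqrt((1.0 - eps) / eps)
```

At ε = 0.05 the Gaussian back-off is about 1.64σ while the distribution-free one is √19 σ ≈ 4.36σ; the ratio of the two is one simple quantitative measure of the conservatism the abstract refers to.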
Procedia PDF Downloads 568
9584 An Extended Inverse Pareto Distribution, with Applications
Authors: Abdel Hadi Ebraheim
Abstract:
This paper introduces a new extension of the Inverse Pareto distribution in the framework of the Marshall-Olkin (1997) family of distributions. This model is capable of modeling various shapes of aging and failure data. The statistical properties of the new model are discussed. Several methods are used to estimate the parameters involved. Explicit expressions are derived for different types of moments that are of value in reliability analysis. Besides, the order statistics of samples from the new proposed model have been studied. Finally, the usefulness of the new model for modeling reliability data is illustrated using two real data sets, together with a simulation study.
Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation
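As one common parameterization (the paper's exact form may differ), the inverse Pareto CDF G(x) = (x/(x + θ))^α can be extended by the Marshall-Olkin construction, which tilts the survival function S̄(x) = 1 − G(x) by an extra parameter β and recovers the baseline at β = 1:

```python
def inv_pareto_cdf(x, alpha, theta=1.0):
    """Baseline inverse Pareto CDF, G(x) = (x / (x + theta))**alpha,
    for x > 0 with shape alpha > 0 and scale theta > 0."""
    return (x / (x + theta)) ** alpha

def mo_inv_pareto_cdf(x, alpha, beta, theta=1.0):
    """Marshall-Olkin (1997) extension: the extended survival function
    is beta * S(x) / (1 - (1 - beta) * S(x)); beta = 1 gives back G."""
    sbar = 1.0 - inv_pareto_cdf(x, alpha, theta)
    return 1.0 - beta * sbar / (1.0 - (1.0 - beta) * sbar)
```

The extra parameter β acts as a proportional-odds tilt, which is what gives the extended family its additional flexibility in hazard-rate shapes.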
Procedia PDF Downloads 75
9583 Evaluation on Effective Size and Hysteresis Characteristics of CHS Damper
Authors: Daniel Y. Abebe, Jaehyouk Choi
Abstract:
This study aims to evaluate the effective size and hysteresis characteristics of the Circular Hollow Steel (CHS) damper. The CHS damper is among the steel dampers widely used for seismic energy dissipation because they are easy to install and maintain and are low in cost. The CHS damper dissipates seismic energy through metallic deformation, owing to the geometrical elasticity of its circular shape and its fatigue resistance around the connection part. After calculating the effective size, which was found to be a height-to-diameter ratio of √3, nonlinear FE analyses were conducted to evaluate the hysteresis characteristics. To verify the simulation, quasi-static loading tests were carried out; the comparison gave satisfactory results.
Keywords: SS400 steel, circular hollow steel damper, effective size, quasi static loading, FE analysis
Procedia PDF Downloads 419
9582 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)
Authors: Longqing Li
Abstract:
The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student's t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets' daily returns in the US, UK, China and Hong Kong over the most recent ten years, at the 95% confidence level. To improve the predictive power and search for the best-performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on the differences between them in estimating one-day-ahead Value-at-Risk (VaR). Second, to account for the non-normality of the distribution of financial-market returns, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that Exponential GARCH yields the best estimate in out-of-sample one-day-ahead Value-at-Risk (VaR) forecasting. Moreover, the discrepancy in performance between the GARCH models and the conditional EVT is indistinguishable.
Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting
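The back-testing step can be illustrated with the standard Kupiec proportion-of-failures test, which checks whether the observed rate of VaR exceptions matches the nominal rate (a generic sketch of one common backtest, not the paper's full dynamic procedure; it assumes 0 < exceptions < observations):

```python
import math

def kupiec_pof(n_obs, n_exceptions, var_level=0.95):
    """Kupiec proportion-of-failures likelihood-ratio statistic.

    Under the null (correct VaR coverage) the statistic is
    asymptotically chi-square with 1 degree of freedom, so values
    above 3.841 reject correct coverage at the 5% level.
    """
    p = 1.0 - var_level            # nominal exception rate
    x, n = n_exceptions, n_obs
    phat = x / n                   # observed exception rate
    ll0 = x * math.log(p) + (n - x) * math.log(1.0 - p)
    ll1 = x * math.log(phat) + (n - x) * math.log(1.0 - phat)
    return -2.0 * (ll0 - ll1)
```

For 250 trading days at the 95% level, about 12-13 exceptions are expected; a model producing far more (or far fewer) exceptions is flagged by a large statistic.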
Procedia PDF Downloads 314
9581 2D Monte Carlo Simulation of Grain Growth under Transient Conditions
Authors: K. R. Phaneesh, Anirudh Bhat, G. Mukherjee, K. T. Kashyap
Abstract:
Extensive Monte Carlo Potts model simulations were performed on a 2D square lattice to investigate the effects of higher simulated temperatures on grain growth kinetics. A range of simulation temperatures (kTs) was applied to a matrix of size 1000², with Q-state 64, dispersed with second-phase particles at fractions ranging from 0.001 to 0.1, and then run to 100,000 Monte Carlo steps. The average grain size, the largest grain size and the grain growth exponent were evaluated for all particle fractions and simulated temperatures. After evaluating several growth parameters, the critical temperature for a square lattice with eight nearest neighbors was found to be kTs = 0.4.
Keywords: average grain size, critical temperature, grain growth exponent, Monte Carlo steps
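The simulation scheme above can be sketched at toy scale (a small lattice rather than the study's 1000² matrix, and without the second-phase particle pinning), using the usual Potts-model recipe: pick a site, propose reorienting it to a neighbouring grain, and accept with Metropolis probability at simulation temperature kT:

```python
import math
import random

def potts_step(lattice, kT, rng):
    """One Monte Carlo step (N*N flip attempts) of a 2D Q-state Potts
    grain-growth model: periodic square lattice, 8 nearest neighbours,
    Metropolis acceptance at simulation temperature kT."""
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [lattice[(i + di) % n][(j + dj) % n]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]
        old = lattice[i][j]
        new = rng.choice(nbrs)  # reorient to a neighbouring grain
        # energy change = change in number of unlike-neighbour bonds
        dE = sum(s != new for s in nbrs) - sum(s != old for s in nbrs)
        if dE <= 0 or (kT > 0 and rng.random() < math.exp(-dE / kT)):
            lattice[i][j] = new
```

Since flips only copy existing neighbour orientations, repeated steps coarsen the microstructure (the number of distinct grain orientations never increases); raising kT admits energy-increasing flips, which is the "higher simulated temperature" effect the study varies.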
Procedia PDF Downloads 516