Search results for: logistic distribution

5347 Determining Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin

Authors: Naci Büyükkaracığan

Abstract:

Today, the need for water sources is swiftly increasing due to population growth. At the same time, it is known that some regions will face water shortages and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and the prediction of droughts and floods from short-term flows, are of great importance. Selecting the most appropriate probability distribution is essential for describing low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be affected by drought, which will reduce the amount of water available for use. The aim of this study is to derive appropriate probability distributions for the frequency analysis of annual minimum flows at 6 gauging stations in the Gediz Basin. After applying 10 different probability distributions, six parameter estimation methods, and 3 goodness-of-fit tests, the Pearson type 3 and generalized extreme value distributions were found to give optimal results.
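
As a rough illustration of the procedure described above, the following Python sketch fits candidate distributions, including Pearson type 3 and generalized extreme value, to a set of annual minimum flows and ranks them with a Kolmogorov-Smirnov goodness-of-fit test. The flow values are synthetic placeholders, not the Gediz station records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
min_flows = rng.gamma(shape=2.0, scale=1.5, size=40)  # placeholder annual minimum flows (m3/s)

candidates = {
    "pearson3": stats.pearson3,      # Pearson type 3
    "genextreme": stats.genextreme,  # generalized extreme value
    "lognorm": stats.lognorm,        # a third candidate for comparison
}

for name, dist in candidates.items():
    params = dist.fit(min_flows)     # maximum-likelihood parameter estimates
    ks_stat, p_value = stats.kstest(min_flows, name, args=params)
    print(f"{name:11s} KS = {ks_stat:.3f}, p = {p_value:.3f}")
```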

Keywords: Gediz Basin, goodness-of-fit tests, minimum flows, probability distribution

Procedia PDF Downloads 269
5346 Reversible Information Hiding in Encrypted JPEG Bitstream by LSB Based on Inherent Algorithm

Authors: Vaibhav Barve

Abstract:

Reversible information hiding has drawn a lot of interest lately. Because the process is reversible, the original digital data can be restored completely. It is a scheme in which secret data are stored in digital media such as images, video, or audio to prevent unauthorized access and for security purposes. Generally, a JPEG bitstream is used to store this key data: the JPEG bitstream is first encrypted into a well-organized structure, and the secret information is then embedded into this encrypted region by slightly modifying the bitstream. Pixels suitable for data embedding are identified, and the key details are embedded accordingly. In the proposed framework, the RC4 algorithm is used to encrypt the JPEG bitstream. The encryption key is supplied by the system user and is also required at the time of decryption. Enhanced least-significant-bit (LSB) substitution steganography is implemented using a genetic algorithm. The number of bits that must be embedded in a given coefficient is adaptive; by choosing proper parameters, high capacity can be obtained while ensuring high security. A logistic map is used for shuffling the bits, and a genetic algorithm (GA) is used to find the right parameters for the logistic map. A data embedding key is used at the time of embedding. Using the correct image encryption and data embedding keys, the receiver can easily extract the embedded secure data and completely recover both the original image and the original secret information. When the embedding key is absent, the original image can still be recovered approximately, with sufficient quality, without access to the embedded data.
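
A minimal sketch of the logistic-map shuffling step mentioned above is given below. The map parameters r and x0 are illustrative values (in the paper they are chosen by a genetic algorithm), and the bit vector is a stand-in for the embedded payload.

```python
import numpy as np

def logistic_permutation(n_bits, r=3.99, x0=0.4567):
    """Derive a shuffling order for n_bits positions from a logistic map trajectory."""
    x, values = x0, np.empty(n_bits)
    for i in range(n_bits):
        x = r * x * (1.0 - x)        # logistic map iteration
        values[i] = x
    return np.argsort(values)         # ranking the chaotic sequence gives the permutation

bits = np.random.default_rng(1).integers(0, 2, size=16)
perm = logistic_permutation(bits.size)
shuffled = bits[perm]                 # embed bits in this shuffled order

restored = np.empty_like(bits)
restored[perm] = shuffled             # knowing (r, x0) makes the shuffle exactly reversible
assert np.array_equal(restored, bits)
```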

Keywords: data embedding, decryption, encryption, reversible data hiding, steganography

Procedia PDF Downloads 285
5345 Max-Entropy Feed-Forward Clustering Neural Network

Authors: Xiaohan Bookman, Xiaoyan Zhu

Abstract:

The outputs of a non-linear feed-forward neural network are positive and can be treated as probabilities when they are normalized to sum to one. If the entropy-based principle is taken into consideration, the outputs for each sample can be represented as the distribution of that sample over the different clusters. The entropy-based principle allows an unknown distribution to be estimated under limited conditions. As this paper defines two processes in the feed-forward neural network, the limiting condition is the abstracted features of the samples, which are computed in the abstraction process, and the final outputs are the probability distribution over the clusters in the clustering process. By incorporating the entropy-based principle into the feed-forward neural network, a clustering method is obtained. We conducted experiments on six open UCI data sets, compared against several baselines, and used purity as the evaluation measure. The results show that our method outperforms all the baselines, which are among the most popular clustering methods.

Keywords: feed-forward neural network, clustering, max-entropy principle, probabilistic models

Procedia PDF Downloads 433
5344 Agroforestry Systems and Practices and Its Adoption in Kilombero Cluster of Sagcot, Tanzania

Authors: Lazaro E. Nnko, Japhet J. Kashaigili, Gerald C. Monela, Pantaleo K. T. Munishi

Abstract:

Agroforestry systems and practices are perceived to improve livelihoods and the sustainable management of natural resources. However, their adoption in various regions differs with biophysical conditions and societal characteristics. This study was conducted in Kilombero District to investigate the factors influencing the adoption of different agroforestry systems and practices in agro-ecosystems and farming systems. A household survey, key informant interviews, and focus group discussions were used for data collection in three villages. Descriptive statistics and multinomial logistic regression in SPSS were applied for the analysis. Results show that home gardens were the dominant practice in Igima and Ngajengwa villages (63.3% and 66.7%, respectively), while mixed intercropping dominated in Mbingu village (56.67%). Agrosilvopasture systems were dominant in Igima and Ngajengwa villages (56.7% and 66.7%, respectively), while in Mbingu village the dominant system was agrosilviculture (66.7%). The multinomial logistic regression shows that several explanatory variables were statistically significant predictors of the adoption of agroforestry systems and practices. Residence type and sex were the most dominant factors influencing the adoption of agroforestry systems; duration of stay in the village, availability of extension education, residence type, and sex were the dominant factors influencing the adoption of agroforestry practices. The most important and statistically significant factors among these were residence type and sex. The study concludes that agroforestry will be more successful if local priorities, including the socio-economic needs and characteristics of the society, are considered in designing systems and practices. The socio-economic needs of the community should be addressed when expanding the adoption of agroforestry systems and practices.
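
The sketch below illustrates the multinomial logistic regression step with statsmodels; the variable names and synthetic data are illustrative stand-ins for the household survey, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 90
predictors = pd.DataFrame({
    "sex": rng.integers(0, 2, n),                 # 0 = female, 1 = male
    "native_resident": rng.integers(0, 2, n),     # residence type
    "years_in_village": rng.integers(1, 40, n),   # duration of stay
    "extension_education": rng.integers(0, 2, n),
})
# outcome categories: 0 = agrosilviculture, 1 = agrosilvopasture, 2 = other
system = rng.integers(0, 3, n)

model = sm.MNLogit(system, sm.add_constant(predictors)).fit(disp=False)
print(model.summary())   # coefficients and p-values for each outcome category
```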

Keywords: agroforestry adoption, agroforestry systems, agroforestry practices, agroforestry, Kilombero

Procedia PDF Downloads 111
5343 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solids removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes, the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ2, two moments of the RTD, were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was most favourable at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: all of the tracer took 8 minutes to leave the white dolomite at 3.8 m h-1, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves the best influent distribution over the filtration system, helps to identify the applicability of a material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
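
The RTD moments described above can be computed as in the short sketch below; the outlet concentration curve is synthetic, not the measured dye response.

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

t = np.linspace(0.0, 12.0, 121)             # time after injection (min)
c = np.exp(-(t - 4.0) ** 2 / 2.0)           # placeholder outlet tracer concentration

E = c / trapezoid(c, t)                     # exit-age distribution E(t)
F = cumulative_trapezoid(E, t, initial=0.0) # cumulative distribution function F(t)

mrt = trapezoid(t * E, t)                   # first moment: mean residence time
variance = trapezoid((t - mrt) ** 2 * E, t) # second central moment
print(f"MRT = {mrt:.2f} min, variance = {variance:.2f} min^2")
```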

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 272
5342 A Benchtop Experiment to Study Changes in Tracer Distribution in the Subarachnoid Space

Authors: Smruti Mahapatra, Dipankar Biswas, Richard Um, Michael Meggyesy, Riccardo Serra, Noah Gorelick, Steven Marra, Amir Manbachi, Mark G. Luciano

Abstract:

Intracranial pressure (ICP) is profoundly regulated by the effects of cardiac pulsation and the volume of incoming blood. Furthermore, these effects on ICP are amplified by the presence of a rigid skull that does not allow changes in total volume during the cardiac cycle. These factors play a pivotal role in cerebrospinal fluid (CSF) dynamics and distribution, with consequences that are not well understood to this date and that may have a deep effect on the functioning of the central nervous system (CNS). We designed this study with two specific aims: (a) to study how pulsatility influences local CSF flow, and (b) to study how modulating intracranial pressure affects drug distribution throughout the subarachnoid space (SAS) globally. In order to achieve these aims, we built an elaborate in-vitro model of the SAS that closely mimics the dimensions and flow rates of physiological systems. To modulate intracranial pressure, we used an intracranially implanted, cardiac-gated, volume-oscillating balloon (CADENCE device). A commercially available dye was used to visualize changes in CSF flow. We first implemented two control cases, observing how the tracer behaves in the presence of pulsations from the brain phantom and the balloon individually. After establishing the controls, we tested two cases, having the brain and the balloon pulsate together in sync and out of sync. We then analyzed the distribution area using image processing software. The in-sync case produced a significant, roughly fivefold increase in the tracer distribution area relative to the out-of-sync case. Assuming that the tracer fluid mimics blood flow movement, a drug introduced into the SAS with such a system in place would show enhanced distribution and increased bioavailability of therapeutic drugs to a wider spectrum of brain tissue.

Keywords: blood-brain barrier, cardiac-gated, cerebrospinal fluid, drug delivery, neurosurgery

Procedia PDF Downloads 178
5341 Atomistic Insight into the System of Trapped Oil Droplet/ Nanofluid System in Nanochannels

Authors: Yuanhao Chang, Senbo Xiao, Zhiliang Zhang, Jianying He

Abstract:

The role of nanoparticles (NPs) in enhanced oil recovery (EOR) is being increasingly emphasized. In this study, the motion of NPs and the local stress distribution of a trapped oil droplet/nanofluid system in nanochannels are studied with coarse-grained modeling and molecular dynamics simulations. The results illustrate three motion patterns for NPs: hydrophilic NPs are more likely to adsorb on the channel and stay near the three-phase contact areas, hydrophobic NPs move inside the oil droplet as clusters, and mixed NPs are mostly trapped at the oil-water interface. NPs in each pattern affect the flow of fluid and the interfacial thickness to various degrees. The calculation of atomistic stress shows that higher stress values occur where the NPs aggregate, and different occurrence patterns correspond to specific local stress distributions. Significantly, in the three-phase contact area for hydrophilic NPs, a local stress distribution close to the pattern of structural disjoining pressure is observed, which demonstrates the existence of structural disjoining pressure in molecular dynamics simulations for the first time. Our results guide the design and screening of NPs for EOR and provide a basic understanding of nanofluid applications.

Keywords: local stress distribution, nanoparticles, enhanced oil recovery, molecular dynamics simulation, trapped oil droplet, structural disjoining pressure

Procedia PDF Downloads 132
5340 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The table of critical values was constructed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong points of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the extension to the 2-dimensional case has been completed, allowing up to 5 parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives for nonstandard problems and large amounts of data.
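
A rough numerical sketch of the idea is shown below: the absolute difference between two normal densities is integrated over the domain and compared to a critical value simulated under the null of identical parameters. The grid, sample size, and number of replications are illustrative choices, not those used to build the paper's critical-value table, and the CDF-based transformation of the measure is not reproduced here.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

GRID = np.linspace(-10.0, 10.0, 2001)

def density_distance(mu1, sd1, mu2, sd2):
    f1 = stats.norm.pdf(GRID, mu1, sd1)
    f2 = stats.norm.pdf(GRID, mu2, sd2)
    return trapezoid(np.abs(f1 - f2), GRID)   # integrated absolute density difference

# simulate the null distribution of the statistic for two samples of size n
rng = np.random.default_rng(3)
n, reps = 50, 2000
null_stats = np.empty(reps)
for i in range(reps):
    a, b = rng.normal(size=n), rng.normal(size=n)
    null_stats[i] = density_distance(a.mean(), a.std(ddof=1), b.mean(), b.std(ddof=1))
critical_value = np.quantile(null_stats, 0.95)

observed = density_distance(0.0, 1.0, 0.4, 1.3)   # the two parameter sets being compared
print(observed > critical_value)                   # reject joint equality of mean and variance?
```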

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 172
5339 Distribution System Planning with Distributed Generation and Capacitor Placements

Authors: Nattachote Rugthaicharoencheep

Abstract:

This paper presents a feeder reconfiguration problem in distribution systems. The objective is to minimize the system power loss and to improve bus voltage profile. The optimization problem is subjected to system constraints consisting of load-point voltage limits, radial configuration format, no load-point interruption, and feeder capability limits. A method based on genetic algorithm, a search algorithm based on the mechanics of natural selection and natural genetics, is proposed to determine the optimal pattern of configuration. The developed methodology is demonstrated by a 33-bus radial distribution system with distributed generations and feeder capacitors. The study results show that the optimal on/off patterns of the switches can be identified to give the minimum power loss while respecting all the constraints.

Keywords: network reconfiguration, distributed generation, capacitor placement, loss reduction, genetic algorithm

Procedia PDF Downloads 172
5338 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study proposes a method for estimating the stress distribution of beam structures based on TLS (terrestrial laser scanning). The main components of the method are the creation of lattices of raw TLS data that satisfy suitable conditions and the application of CSSI (cubic smoothing spline interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member or of the whole structure is an important factor in the safety evaluation of a structure. Existing sensors, such as electric strain gauges (ESG) and linear variable differential transformers (LVDT), are contact-type sensors that must be installed on the structural members, and they have various limitations, such as the need for separate space where the network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can capture the whole shape of the structure, has been applied to structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds containing many points with local coordinates. Point clouds are not linearly distributed but dispersed; interpolation is therefore essential for their analysis. Through the formation of averaged lattices and the application of CSSI to the raw data, a method was developed that can estimate the displacement of a simple beam. The developed method can be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. A comparison of the estimated stress and the reference stress confirms the validity of the method.
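
The displacement-to-stress step can be sketched as below: a cubic smoothing spline is fitted to averaged lattice deflections along the beam, and the bending stress follows from the spline curvature via Euler-Bernoulli beam theory. The deflection values, Young's modulus, and section depth are assumed placeholders, not the test data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 3.0, 31)                  # positions of averaged lattices along the beam (m)
noise = 1e-5 * np.random.default_rng(4).normal(size=x.size)
w = 1e-3 * np.sin(np.pi * x / 3.0) + noise     # placeholder measured deflection (m)

spline = UnivariateSpline(x, w, k=3, s=1e-9)   # cubic smoothing spline (CSSI)
curvature = spline.derivative(n=2)(x)          # w''(x)

E = 200e9   # Young's modulus (Pa), assumed
c = 0.10    # distance from neutral axis to extreme fiber (m), assumed
sigma = E * c * np.abs(curvature)              # estimated bending stress distribution (Pa)
print(f"peak estimated stress = {sigma.max() / 1e6:.1f} MPa")
```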

Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation

Procedia PDF Downloads 431
5337 Invasive Ranges of Gorse (Ulex europaeus) in South Australia and Sri Lanka Using Species Distribution Modelling

Authors: Champika S. Kariyawasam

Abstract:

The distribution of gorse (Ulex europaeus) in South Australia has been modelled using 126 presence-only location records as a function of seven climate parameters. The predicted range of U. europaeus lies mainly along the Mount Lofty Ranges in the Adelaide Hills and on Kangaroo Island. Annual precipitation and yearly average aridity index were the highest-contributing variables in the final model formulation. The jackknife procedure was employed to identify the contribution of different variables to the gorse model outputs, and response curves were used to predict changes with changing environmental variables. Based on this analysis, it was revealed that the combined effect of two or more variables on the model prediction can differ completely from the effect of the original variables on their own. This work also demonstrates the need for a careful approach when selecting environmental variables for projecting correlative models to climatically distinct areas. Maxent acts as a robust model when projecting the fitted species distribution model to another area with changing climatic conditions, whereas the generalized linear model, bioclim, and domain models were found to be less robust in this regard. These findings are important for predicting and managing invasive alien gorse not only in South Australia and Sri Lanka but also in other countries of its invasive range.

Keywords: invasive species, Maxent, species distribution modelling, Ulex europaeus

Procedia PDF Downloads 129
5336 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column

Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan

Abstract:

Under the influence of waves, oil in the sea is subject to vertical dispersion in the water column. Scientists' knowledge of how oil is dispersed in the water column is among the most limited of all the processes affecting oil in the marine environment, which highlights the need for research in this field. This study therefore investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, needed both to understand the physics of the phenomenon and to calibrate numerical models, led to the development of laboratory models in this research. In line with the aim of the present study, which is to investigate the distribution of oil in homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the ideal conditions were reached, and its distribution into deep water due to turbulence was investigated. In this study, all experimental processes were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered a key aspect of pollutant diffusion in the oscillating-grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion in terms of the turbulence parameters. The results showed that, with the characteristics of the present system, oil diffusion at the four depths with grid motion at a frequency of 0.8 Hz was greater than in the static mode by 26.18, 31.57, 37.5, and 50% (from top to bottom). Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.

Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill

Procedia PDF Downloads 73
5335 Cigarette Smoking and Alcohol Use among Mauritian Adolescents: Analysis of 2017 WHO Global School-Based Student Health Survey

Authors: Iyanujesu Adereti, Tajudeen Basiru, Ayodamola Olanipekun

Abstract:

Background: Substance abuse among adolescents is a global public health concern. Although alcohol and cigarettes are among the substances most commonly abused by adolescents, there are limited studies on the prevalence of alcohol use and cigarette smoking among adolescents in Mauritius. Objectives: To determine the prevalence of cigarette smoking and alcohol use and the associated correlates among school-going adolescents in Mauritius. Methodology: Data obtained from the 2017 WHO Global School-based Student Health Survey (GSHS) of 3,012 school-going adolescents in Mauritius were analyzed using STATA. Descriptive statistics were used to obtain the prevalence. Bivariate and multivariate logistic regression analyses were used to evaluate predictors of cigarette smoking and alcohol use. Results: The prevalence of alcohol consumption and cigarette smoking was 26.0% and 17.1%, respectively. Smoking and alcohol use were more prevalent among males, younger adolescents, and those in higher school grades (p-value <.000). In multivariable logistic regression, male gender was associated with a higher risk of cigarette smoking (adjusted odds ratio (aOR) [95% confidence interval (CI)] = 1.51 [1.06-2.14]) but a lower risk of alcohol use (aOR [95% CI] = 0.69 [0.53-0.90]), while older age (mid and late adolescence) and parental smoking were associated with an increased risk of alcohol use (aOR [95% CI] = 1.94 [1.34-2.99] and 1.36 [1.05-1.78], respectively). Marijuana use, truancy, being in a fight, and suicide ideation were associated with increased odds of alcohol use (aOR [95% CI] = 3.82 [3.39-6.09], 2.15 [1.62-2.87], 1.83 [1.34-2.49], and 1.93 [1.38-2.69], respectively) and cigarette smoking (aOR [95% CI] = 17.28 [10.4-28.51], 1.73 [1.21-2.49], 1.67 [1.14-2.45], and 2.17 [1.43-3.28], respectively), while involvement in sexual activity was associated with a reduced risk of alcohol use (aOR [95% CI] = 0.50 [0.37-0.68]) and cigarette smoking (aOR [95% CI] = 0.47 [0.33-0.69]). Parental support and parental monitoring were uniquely associated with a lower risk of cigarette smoking (aOR [95% CI] = 0.69 [0.47-0.99] and 0.62 [0.43-0.91], respectively). Conclusion: The high prevalence of alcohol use and cigarette smoking in this study shows the need for the government of Mauritius to strengthen policies to address this issue, taking into account the various risk and protective factors.
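
The sketch below shows how adjusted odds ratios with 95% confidence intervals of the kind reported above can be obtained from a multivariable logistic regression in statsmodels; the variables and data are synthetic stand-ins for the GSHS items.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
covariates = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "older_adolescent": rng.integers(0, 2, n),
    "parental_smoking": rng.integers(0, 2, n),
    "marijuana_use": rng.integers(0, 2, n),
})
smoker = rng.integers(0, 2, n)                 # 1 = current cigarette smoking

fit = sm.Logit(smoker, sm.add_constant(covariates)).fit(disp=False)
aor = np.exp(fit.params)                       # adjusted odds ratios
ci = np.exp(fit.conf_int())                    # 95% confidence intervals on the OR scale
table = pd.concat([aor.rename("aOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
print(table.round(2))
```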

Keywords: adolescent health, alcohol use, cigarette smoking, global school-based student health survey

Procedia PDF Downloads 244
5334 Atlantic Sailfish (Istiophorus albicans) Distribution off the East Coast of Florida from 2003 to 2018 in Response to Sea Surface Temperature

Authors: Meredith M. Pratt

Abstract:

The Atlantic sailfish (Istiophorus albicans) ranges from 40°N to 40°S in the western Atlantic Ocean and has great economic and recreational value for sport fishers. Off the eastern coast of Florida, charter boats often target this species, and Stuart, Florida, bills itself as the sailfish capital of the world. Sailfish tag data from The Billfish Foundation and NOAA were used to determine the relationship between sea surface temperature (SST) and the distribution of Atlantic sailfish caught and released over a fifteen-year period (2003 to 2018). Tagging information was collected from local sport fishermen in Florida. Using the time and location of each landed sailfish, a satellite-derived SST value was obtained for each point. The purpose of this study was to determine whether sea surface warming was associated with changes in sailfish distribution. On average, sailfish were caught at 26.16 ± 1.70°C (x̄ ± s.d.) over the fifteen-year period. The most sailfish catches occurred at temperatures ranging from 25.2°C to 25.5°C. Over the fifteen-year period, sailfish catches decreased at lower temperatures (~23°C and ~24°C) and at 31°C; at ~25°C and ~30°C there was no change in catch numbers, and from 26°C to 29°C there was an increase in the number of sailfish caught. Based on these results, increasing ocean temperatures will have an impact on the distribution and habitat utilization of sailfish. Warming sea surface temperatures create a need for more policy and regulation to protect the Atlantic sailfish and related highly migratory billfish species.

Keywords: Atlantic sailfish, billfish, Istiophorus albicans, sea surface temperature

Procedia PDF Downloads 138
5333 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth globally in recent years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth means the related e-waste deserves sufficient attention regionally and globally, since about 40% of its total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of country or region, and to overcome the previous errors. The logistic model method, combined with the STELLA program, was used to carry out this study. A simulation for Rwanda was then conducted and compared with results from other countries as model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, corresponding to 125 tons of waste, in 2014, with the peak of e-waste production expected in 2017. By 2020, the stock is expected to reach 4.17 million obsolete phones and 351.97 tons of waste, with an environmental impact intensity 21 times that of 2005. Through model testing and validation, it is concluded that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it responds to the questions raised by previous studies from the Czech Republic, Iran, and China.
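
The logistic-model idea can be sketched as below: the in-use phone stock grows logistically toward a carrying capacity, and phones retire after an assumed average lifespan, giving the annual obsolete stream and e-waste mass. K, r, the lifespan, and the handset mass are illustrative values, not the calibrated Rwandan parameters.

```python
import numpy as np

K = 12e6          # carrying capacity (phones in use), assumed
r = 0.35          # intrinsic growth rate (1/yr), assumed
N0 = 0.5e6        # stock in the base year, assumed
lifespan = 3      # average handset lifespan (yr), assumed
unit_mass = 0.085 # average handset mass (kg)

years = np.arange(2005, 2021)
t = years - years[0]
stock = K / (1.0 + ((K - N0) / N0) * np.exp(-r * t))   # logistic growth of the in-use stock

new_phones = np.diff(stock, prepend=N0)                # phones entering use each year
obsolete = np.roll(new_phones, lifespan)               # retired after the assumed lifespan
obsolete[:lifespan] = 0.0                              # nothing retires before one lifespan has passed
waste_tons = obsolete * unit_mass / 1000.0

for year, n_obs, w in zip(years, obsolete, waste_tons):
    print(year, f"{n_obs:12.0f} obsolete phones {w:8.1f} t e-waste")
```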

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 340
5332 Spatial Differentiation of Elderly Care Facilities in Mountainous Cities: A Case Study of Chongqing

Authors: Xuan Zhao, Wen Jiang

Abstract:

In this study, a web crawler was used to collect POI sample data from the 38 districts and counties of Chongqing in 2022, and ArcGIS was used for coordinate and projection conversion and for data visualization. Kernel density analysis and spatial correlation analysis were used to explore the spatial distribution characteristics of elderly care facilities in Chongqing, and K-means cluster analysis was carried out with GeoDa to study the spatial concentration of elderly care resources in the 38 districts and counties. Finally, the driving forces behind the spatial differentiation of elderly care facilities in the districts and counties of Chongqing were studied using the geographical detector method. The results show that: (1) In terms of spatial distribution structure, the distribution of elderly care facilities in Chongqing is unbalanced, showing a pattern of 'large dispersion and small agglomeration', with a prominent asymmetric pattern of 'dense in the west and sparse in the east, dense in the north and sparse in the south'. (2) In terms of the spatial matching between elderly care resources and the elderly population, there is weak coordination between the input of elderly care resources and the distribution of the elderly population at the county level in Chongqing. (3) The geographical detector analysis shows that the dominant single-factor influences are the size of the elderly population, public financial revenue, and district and county GDP. The influence of each factor on the spatial distribution of elderly care facilities is not simply superimposed but exhibits nonlinear enhancement or two-factor enhancement effects. It is necessary to strengthen the synergistic effect of pairs of factors and to promote the synergy of multiple factors.

Keywords: aging, elderly care facilities, spatial differentiation, geographical detector, driving force analysis, Mountain city

Procedia PDF Downloads 34
5331 Relationship and Associated Factors of Breastfeeding Self-efficacy among Postpartum Couples in Malawi: A Cross-sectional Study

Authors: Roselyn Chipojola, Shu-yu Kuo

Abstract:

Background: Breastfeeding self-efficacy in both mothers and fathers plays a crucial role in improving exclusive breastfeeding rates. However, less is known about the relationship between, and the predictors of, paternal and maternal breastfeeding self-efficacy. This study aimed to examine the relationship and associated factors of breastfeeding self-efficacy (BSE) among mothers and fathers in Malawi. Methods: A cross-sectional study was conducted on 180 pairs of postpartum mothers and fathers at a tertiary maternity facility in central Malawi. BSE was measured using the Breastfeeding Self-Efficacy Scale Short-Form. Depressive symptoms were assessed with the Edinburgh Postnatal Depression Scale. A structured questionnaire was used to collect demographic and health variables. Data were analyzed using multivariable logistic regression and multinomial logistic regression. Results: A higher self-efficacy score was found in mothers (mean = 55.7, standard deviation (SD) = 6.5) than in fathers (mean = 50.2, SD = 11.9). A significant association between paternal and maternal breastfeeding self-efficacy was found (r = 0.32). Age, employment status, and mode of birth were significantly related to maternal and paternal BSE, respectively. Older age and caesarean section delivery were significant factors for the combined BSE scores of couples. A higher BSE score in either the mother or her partner predicted higher exclusive breastfeeding rates, and BSE scores were lower when couples' depressive symptoms were high. Conclusion: BSE is highly correlated between Malawian mothers and fathers, with relatively higher scores in maternal BSE. Importantly, a high BSE in couples predicted higher odds of exclusive breastfeeding, which highlights the need to include both mothers and fathers in future breastfeeding promotion strategies.

Keywords: paternal, maternal, exclusive breastfeeding, breastfeeding self-efficacy, Malawi

Procedia PDF Downloads 65
5330 An Analysis of Human Resource Management Policies for Constructing Employer Brands in the Logistics Sector

Authors: Müberra Yüksel, Ömer Faruk Görçün

Abstract:

The purpose of the present study is to investigate the role of strategic human resource management (SHRM) in constructing 'employer branding' in logistics. Prior research does not focus on internal stakeholders, that is, employees. Although the logistics sector has become customer-oriented, the focus has been solely on service quality as the unique aspect of logistics companies for competitive advantage. With increasing interest lately in internal marketing of the employer brand, the emphasis is on the value that human capital brings to the firm, which cannot be imitated. 'Employer branding' is the application of branding and relationship marketing principles for competitive advantage within SHRM. Employer branding is an organizing framework for human resource managers, since it represents an organization's efforts to promote, both within and outside the firm, a coherent view of what makes it different and desirable as an employer, i.e., the distinct 'employer brand personality' and the 'employee value propositions' (EVP) offered. The presumption of employer branding, enhanced by internal marketing, is that customer-conscious employees handle services better when they are aligned with the business mission and goals. Starting from internal customers, analyzing the gaps in EVP using the analytical hierarchy process (AHP) methodology, and inquiring whether these brand values are communicated and perceived well may be the initial steps of our proposal for employer branding in the logistics sector. This empirical study aims to fill this research gap within the context of an emerging market, Turkey, which is located at a hub of transportation and logistics.
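
The AHP step mentioned above can be sketched as follows: priorities for EVP criteria are derived from a pairwise comparison matrix via its principal eigenvector, and consistency is checked with the consistency ratio. The criteria and the comparison values are illustrative, not those elicited in the study.

```python
import numpy as np

criteria = ["compensation", "career growth", "work environment", "employer reputation"]
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/4, 1/2, 1/3, 1.0],
])                                     # pairwise comparisons (Saaty scale)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.90                         # random index RI = 0.90 for n = 4
print(dict(zip(criteria, weights.round(3))), f"CR = {cr:.3f}")
```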

Keywords: Strategic Human Resource Management (SHRM), employer branding, Employee Value Propositions (EVP), Analytical Hierarchy Process (AHP), logistics

Procedia PDF Downloads 339
5329 Estimation of Particle Size Distribution Using Magnetization Data

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Magnetic nanoparticles possess fascinating properties that make their behavior unique in comparison with the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes greater than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function. However, the fit parameters estimated in this way are found to be unphysical, because the particle size distribution is not taken into account. In this work, an analysis of magnetization data on NiO nanoparticles is presented that considers the effect of the particle size distribution. NiO nanoparticles of two different sizes are prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room-temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples; the average crystallite sizes are estimated to be about 6 and 8 nm. The samples are also characterized by transmission electron microscopy. The magnetization of both samples is measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization are measured as a function of temperature to determine the bifurcation temperature. The magnetization is also measured at several temperatures in the superparamagnetic region. The data are fitted to an appropriate expression that includes a distribution in particle size, following a least-squares fit procedure; the computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
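
A rough sketch of the fitting step is given below: the magnetization curve is modelled as a Langevin response averaged over a lognormal distribution of particle diameters and fitted by least squares. The field range, temperature, saturation magnetization, and the synthetic "data" are placeholders for the measured NiO curves.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

kB, T = 1.380649e-23, 300.0                        # Boltzmann constant (J/K), temperature (K)

def langevin(x):
    x = np.clip(x, 1e-8, None)                     # avoid the singularity at x -> 0
    return 1.0 / np.tanh(x) - 1.0 / x

def magnetization(B, Ms, d_med, sigma):
    """Langevin response averaged over a lognormal diameter distribution (normalized to 1)."""
    def integrand(d, b):
        v = np.pi * d ** 3 / 6.0                   # particle volume
        p = np.exp(-np.log(d / d_med) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))
        return p * langevin(Ms * v * b / (kB * T))
    return np.array([quad(integrand, 0.5e-9, 30e-9, args=(b,))[0] for b in B])

B = np.linspace(0.02, 5.0, 25)                     # applied field (T)
data = magnetization(B, 3.0e5, 6e-9, 0.30)         # synthetic "measurement"
data += 0.005 * np.random.default_rng(6).normal(size=B.size)

popt, _ = curve_fit(magnetization, B, data, p0=[2.5e5, 5e-9, 0.25])
print("Ms (A/m), median diameter (m), sigma:", popt)
```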

Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism

Procedia PDF Downloads 139
5328 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis

Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn

Abstract:

Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains valuable information that has not been explored to its full capacity. Novel processing techniques allow these recordings to be examined from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain the PDF of the amplitudes was fitted by a Gaussian distribution, while in the frequency domain it was fitted by a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs have significantly narrower, short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide have significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of the IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., the standard deviation), whereas this is difficult from the IEGM waveform itself.
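
The two fits described in the abstract can be sketched as follows: a Gaussian PDF is fitted to the time-domain amplitudes of an IEGM segment and a Rayleigh PDF to its frequency-domain magnitudes, and the fitted spread parameters are then what would be tracked before and after drug administration. The signal below is synthetic, not patient data.

```python
import numpy as np
from scipy import stats

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(7)
iegm = 0.5 * np.sin(2 * np.pi * 6 * t) + 0.2 * rng.normal(size=t.size)  # placeholder IEGM segment

mu, sd = stats.norm.fit(iegm)                 # time domain: Gaussian amplitude PDF
mag = np.abs(np.fft.rfft(iegm))
_, scale = stats.rayleigh.fit(mag, floc=0.0)  # frequency domain: Rayleigh magnitude PDF

print(f"Gaussian amplitude fit: mu = {mu:.3f}, sd = {sd:.3f}")
print(f"Rayleigh magnitude fit: scale = {scale:.3f}")
# Narrowing of sd and scale after Ibutilide would be the tracked effect in the study.
```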

Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics

Procedia PDF Downloads 157
5327 Association between Severe Acidemia before Endotracheal Intubation and the Lower First Attempt Intubation Success Rate

Authors: Keiko Naito, Y. Nakashima, S. Yamauchi, Y. Kunitani, Y. Ishigami, K. Numata, M. Mizobe, Y. Homma, J. Takahashi, T. Inoue, T. Shiga, H. Funakoshi

Abstract:

Background: The presence of severe acidemia, defined as pH < 7.2, is common during endotracheal intubation of critically ill patients in the emergency department (ED). Severe acidemia is widely recognized as a predisposing factor for intubation failure. However, it is unclear whether the acidemic condition itself actually makes endotracheal intubation more difficult. We aimed to evaluate whether the presence of severe acidemia before intubation is associated with a lower first-attempt intubation success rate in the ED. Methods: This is a retrospective observational cohort study in the ED of an urban hospital in Japan. The collected data included patient demographics, such as age, sex, and body mass index, the presence of one or more factors of the modified LEMON criteria for predicting difficult intubation, reasons for intubation, blood gas levels, airway equipment, intubation by an emergency physician or not, and the use of the rapid sequence intubation technique. Patients with any of the following were excluded from the analysis: (1) no blood gas drawn before intubation, (2) cardiopulmonary arrest, and (3) age under 18 years. The primary outcome was the first-attempt intubation success rate, compared between the severe acidemia (SA) group and the non-severe acidemia (NA) group. Logistic regression analysis was used to test the first-attempt intubation success rates between the two groups. Results: Over 5 years, a total of 486 intubations were performed, 105 in the SA group and 381 in the NA group. Univariate analysis showed that the first-attempt intubation success rate was lower in the SA group than in the NA group (71.4% vs. 83.5%, p < 0.01). Multivariate logistic regression analysis identified severe acidemia as significantly associated with first-attempt intubation failure (OR 1.9, 95% CI 1.03-3.68, p = 0.04). Conclusions: The presence of severe acidemia before endotracheal intubation lowers the first-attempt intubation success rate in the ED.

Keywords: acidemia, airway management, endotracheal intubation, first-attempt intubation success rate

Procedia PDF Downloads 243
5326 Functioning of Public Distribution System and Calories Intake in the State of Maharashtra

Authors: Balasaheb Bansode, L. Ladusingh

Abstract:

The public distribution system (PDS) is an important component of food security. It is a massive welfare program undertaken by the Government of India and, since India is a federal state, implemented by the state governments, with multiple objectives such as eliminating hunger, reducing malnutrition, and making food consumption affordable. This program reaches the community level through various government agencies. The paper focuses on the accessibility of the PDS at the household level and on how the present policy framework results in exclusion and inclusion errors. It explores the sanctioned food grain quantity received at the household level by the different ration cards issued according to income criteria, and it also highlights the types of corruption in food distribution generated by the PDS. The data used are secondary data from the 68th round of the NSSO, conducted in 2012. Bivariate and multivariate techniques have been used to understand the functioning of the PDS and the consumption of food.

Keywords: calories intake, entitled food quantity, poverty alleviation through PDS, target error

Procedia PDF Downloads 327
5325 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by making use of a variety of techniques, we address the problem of securing the distribution of the copula. This will be done by using several approaches. For example, we will obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, secure Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations will be presented. The proposed methodologies will as well be applied to a sample generated from a known copula distribution in order to validate their effectiveness.

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 67
5324 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers' needs in this competitive market, especially for customers who are considering switching service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry. Understanding the factors behind customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify factors of customer churn based on past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compares the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results show that gradient boosting with the feature selection technique outperformed the other approaches in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
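
A minimal sketch of the comparison described above is given below: the four classifiers are combined with feature scaling and evaluated by F1 score and ROC-AUC. The data here are synthetic; the study used the Orange telecom dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)   # normalization + model
    proba = pipe.predict_proba(X_te)[:, 1]
    print(f"{name:20s} F1 = {f1_score(y_te, pipe.predict(X_te)):.3f}, "
          f"AUC = {roc_auc_score(y_te, proba):.3f}")
```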

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 131
5323 Influence of the Non-Uniform Distribution of Filler Porosity on the Thermal Performance of Sensible Heat Thermocline Storage Tanks

Authors: Yuchao Hua, Lingai Luo

Abstract:

Thermal energy storage is of critical importance for the highly efficient utilization of renewable energy sources. Over the past decades, single-tank thermocline technology has attracted much attention owing to its high cost-effectiveness. In the present work, we investigate the influence of the non-uniform distribution of filler porosity on the thermal performance of packed-bed sensible heat thermocline storage tanks, on the basis of an analytical model obtained by the Laplace transform. It is found that when the total amount of filler material (i.e., the integral of the porosity) is fixed, different porosity distributions can result in significantly different outlet temperature behaviors and thus in varied charging and discharging efficiencies. Our results indicate that a properly designed non-uniform distribution of the fillers can improve the heat storage performance without changing the total amount of filling material.

Keywords: energy storage, heat thermocline storage tank, packed bed, transient thermal analysis

Procedia PDF Downloads 88
5322 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying whether tumors are benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and it can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, deep learning has been the main AI methodology used in research to detect cancer. This study is a novel approach to determining the potential of combining preprocessing algorithms with classification algorithms for detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
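
A minimal sketch of the preprocessing-plus-classification idea is shown below: PCA for dimensionality reduction followed by a k-nearest-neighbors classifier. The genetic-algorithm feature selection step used in the study is omitted here, and the data are a synthetic stand-in for the pathology-scan features.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=200, n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(),            # normalize features
                     PCA(n_components=30),        # reduce dimensionality
                     KNeighborsClassifier(n_neighbors=5))
pipe.fit(X_tr, y_tr)
print("test accuracy:", round(pipe.score(X_te, y_te), 4))
```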

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 78
5321 Numerical Investigation of Fluid Flow and Temperature Distribution on Power Transformer Windings Using Open Foam

Authors: Saeed Khandan Siar, Stefan Tenbohlen, Christian Breuer, Raphael Lebreton

Abstract:

The goal of this article is to investigate the detailed temperature distribution and fluid flow of an oil-cooled winding of a power transformer by means of computational fluid dynamics (CFD). The experimental setup consists of three passes of a zig-zag-cooled disc-type winding, in which the losses are modeled by heating cartridges in each winding segment. A precise temperature sensor measures the temperature of each turn. The laboratory setup allows exact control of the boundary conditions, e.g., the oil flow rate and the inlet temperature. Furthermore, a simulation model is solved using the open-source computational fluid dynamics solver OpenFOAM and validated against the experimental results. The model uses laminar and turbulent flow treatments for the different oil mass flow rates. The good agreement of the simulation results with the experimental measurements validates the model.

Keywords: CFD, conjugated heat transfer, power transformers, temperature distribution

Procedia PDF Downloads 415
5320 The Effect of Different Parameters on a Single Invariant Lateral Displacement Distribution to Consider the Higher Modes Effect in a Displacement-Based Pushover Procedure

Authors: Mohamad Amin Amini, Mehdi Poursha

Abstract:

Nonlinear response history analysis (NL-RHA) is a robust analytical tool for estimating the seismic demands of structures responding in the inelastic range. However, because of its conceptual and numerical complications, the nonlinear static procedure (NSP) is being increasingly used as a suitable tool for the seismic performance evaluation of structures. The conventional pushover analysis methods presented in various codes (FEMA 356; Eurocode-8; ATC-40) are limited to first-mode-dominated structures and cannot take higher-mode effects into consideration. Therefore, for more than a decade, researchers have developed enhanced pushover analysis procedures to take higher-mode effects into account. The main objective of this study is to propose an enhanced invariant lateral displacement distribution that accounts for higher-mode effects in a displacement-based pushover analysis, whereby a set of laterally applied displacements, rather than forces, is monotonically applied to the structure. For this purpose, the effect of different parameters, such as the spectral displacement of the ground motion, the modal participation factor, and the effective modal participating mass ratio, on the lateral displacement distribution is investigated to find the best distribution. The major simplification of this procedure is that the effect of higher modes is concentrated in a single invariant lateral load distribution; therefore, only one pushover analysis is sufficient, without any need for a modal combination rule for combining the responses. The invariant lateral displacement distribution for the pushover analysis is calculated by combining the modal story displacements using modal combination rules. The seismic demands resulting from the different procedures are compared to those from the more accurate nonlinear response history analysis (NL-RHA) as a benchmark solution. Two structures of different heights, 10- and 20-story special steel moment-resisting frames (MRFs), were selected and evaluated, and twenty ground motion records were used to conduct the NL-RHA. The results show that more accurate responses are obtained with the enhanced modal lateral displacement distributions than with the conventional lateral loads.
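
The key combination step can be sketched as below: modal story displacement profiles are combined with the SRSS rule into a single invariant lateral displacement distribution for the displacement-based pushover. The mode shapes, participation factors, spectral displacements, and the choice of SRSS as the combination rule are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

phi = np.array([                       # mode shapes at each story (rows) for 3 modes (columns)
    [0.25, -0.60,  0.90],
    [0.55, -0.80,  0.10],
    [0.80, -0.10, -0.85],
    [1.00,  0.70,  0.60],
])
gamma = np.array([1.30, -0.45, 0.25])  # modal participation factors
Sd = np.array([0.12, 0.04, 0.015])     # spectral displacements of each mode (m)

u_modal = phi * gamma * Sd             # story displacement of each mode: Gamma_n * phi_n * Sd_n
u_srss = np.sqrt((u_modal ** 2).sum(axis=1))   # SRSS combination over modes
print("invariant lateral displacement profile (m):", u_srss.round(4))
```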

Keywords: displacement-based pushover, enhanced lateral load distribution, higher modes effect, nonlinear response history analysis (NL-RHA)

Procedia PDF Downloads 270
5319 Numerical Approach for RC Structural Members Exposed to Fire and After-Cooling Analysis

Authors: Ju-young Hwang, Hyo-Gyoung Kwak, Hong Jae Yim

Abstract:

This paper introduces a numerical analysis method for reinforced concrete (RC) structures exposed to fire and compares the results with experimental results. The proposed analysis method for RC structures under high temperature consists of two steps. The first step is to determine the temperature distribution across the section through a heat transfer analysis using the time-temperature curve. After determination of the temperature distribution, a nonlinear analysis follows. By considering material and geometric nonlinearity together with the temperature distribution, the nonlinear analysis predicts the behavior of the RC structure under fire as a function of exposure time. The proposed method is validated by comparison with experimental results. Finally, a prediction model describing the state of after-cooling concrete is also introduced based on the results of an additional experiment. The product of this study is expected to be embedded in a smart structure monitoring system against fire in u-City.

Keywords: RC structures, heat transfer analysis, nonlinear analysis, after-cooling concrete model

Procedia PDF Downloads 363
5318 Loss Allocation in Radial Distribution Networks for Loads of Composite Types

Authors: Sumit Banerjee, Chandan Kumar Chanda

Abstract:

The paper presents the allocation of active power losses and energy losses to consumers connected to radial distribution networks in a deregulated environment, for loads of composite types. A detailed comparison among four algorithms, namely quadratic loss allocation, proportional loss allocation, pro rata loss allocation, and exact loss allocation, is presented. Quadratic and proportional loss allocation are based on identifying the active and reactive components of the current in each branch and allocating the losses to each consumer; the pro rata loss allocation method is based on the load demand of each consumer; and the exact loss allocation method is based on the actual contribution of each consumer to the active power loss. The effectiveness of the proposed comparison among the four algorithms for composite loads is demonstrated through an example.
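
The simplest of the four schemes, pro rata allocation, can be sketched as below: the total active power loss of the feeder is shared among consumers in proportion to their demand. The loads and total loss are illustrative values, not those of the 33-bus test system.

```python
# pro rata loss allocation: each consumer's share is proportional to its demand
consumers = {"C1": 60.0, "C2": 45.0, "C3": 90.0, "C4": 30.0}   # demand (kW)
total_loss = 12.5                                              # total feeder active power loss (kW)

total_demand = sum(consumers.values())
pro_rata = {name: total_loss * demand / total_demand for name, demand in consumers.items()}
for name, loss in pro_rata.items():
    print(f"{name}: {loss:.2f} kW allocated")
```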

Keywords: composite type, deregulation, loss allocation, radial distribution networks

Procedia PDF Downloads 282