Search results for: assumptions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 504

234 Associated Map and Inter-Purchase Time Model for Multiple-Category Products

Authors: Ching-I Chen

Abstract:

The continued rise of e-commerce is the main driver of the rapid growth of global online purchasing. Consumers can buy nearly everything they want in a single session through online shopping. Purchase behavior models that focus on a single product category are therefore insufficient to describe online shopping behavior, and the analysis of multi-category purchases is becoming more and more popular. For example, market basket analysis explores customers' tendency to buy associated product categories together. The information derived from market basket analysis facilitates the design of cross-selling strategies and product recommendation systems. To detect the association between different product categories, we use market basket analysis with the multidimensional scaling technique to build an associated map that describes how likely multiple product categories are to be bought at the same time. We also build an inter-purchase time model for associated products to describe how likely a product is to be bought after its associated product is bought. We classify the inter-purchase time behaviors of multi-category products into nine types and use a mixture regression model to integrate those behaviors under our assumptions about purchase sequences. Our sample data come from comScore, which provides a panelist-level database that captures the detailed browsing and buying behavior of internet users across the United States. We find that the inter-purchase time from books to movies is shorter than that from movies to books. Based on the model analysis and empirical results, this research proposes managerial applications and recommendations.
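
As a hedged illustration of the associated-map idea described above, the sketch below builds a two-dimensional map from a small, hypothetical co-purchase matrix using multidimensional scaling; the category names, counts and the similarity normalisation are assumptions for demonstration, not the comScore data or the authors' exact procedure.

```python
# Illustrative sketch (not the paper's code): build an "associated map" from
# hypothetical co-purchase counts using multidimensional scaling (MDS).
import numpy as np
from sklearn.manifold import MDS

categories = ["books", "movies", "music", "games"]        # hypothetical categories
co_purchase = np.array([[120, 45, 30, 10],                # co-purchase counts (made up)
                        [ 45, 90, 25, 20],
                        [ 30, 25, 70, 15],
                        [ 10, 20, 15, 60]], dtype=float)

# Convert co-purchase frequency to a dissimilarity: categories bought together
# often should sit close together on the map.
support = co_purchase / co_purchase.sum()
similarity = support / np.sqrt(np.outer(np.diag(support), np.diag(support)))
dissimilarity = 1.0 - similarity / similarity.max()
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(categories, coords):
    print(f"{name:>6s}: ({x:+.3f}, {y:+.3f})")
```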

Keywords: multiple-category purchase behavior, inter-purchase time, market basket analysis, e-commerce

Procedia PDF Downloads 350
233 Advantages of Multispectral Imaging for Accurate Gas Temperature Profile Retrieval from Fire Combustion Reactions

Authors: Jean-Philippe Gagnon, Benjamin Saute, Stéphane Boubanga-Tombet

Abstract:

Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. However, it is well known that most combustion gases such as carbon dioxide (CO₂), water vapor (H₂O), and carbon monoxide (CO) selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperature profiles of most combustion processes derived from conventional broadband imaging are inaccurate without prior knowledge or assumptions about the spectral emissivity properties of the combustion gases. Using spectral filters allows estimating these critical emissivity parameters in addition to providing selectivity regarding the chemical nature of the combustion gases. However, due to the turbulent nature of most flames, it is crucial that such information be obtained without sacrificing temporal resolution. For this reason, Telops has developed a time-resolved multispectral imaging system which combines a high-performance broadband camera synchronized with a rotating spectral filter wheel. In order to illustrate the benefits of using this system to characterize combustion experiments, measurements were carried out using a Telops MS-IR MW on a very simple combustion system: a wood fire. The temperature profiles calculated using the spectral information from the different channels were compared with corresponding temperature profiles obtained with conventional broadband imaging. The results illustrate the benefits of the Telops MS-IR cameras for the characterization of laminar and turbulent combustion systems at a high temporal resolution.

Keywords: infrared, multispectral, fire, broadband, gas temperature, IR camera

Procedia PDF Downloads 111
232 Ten Patterns of Organizational Misconduct and a Descriptive Model of Interactions

Authors: Ali Abbas

Abstract:

This paper presents a descriptive model of organizational misconduct based on observed patterns that occur before and after an ethical collapse. The patterns were classified by categorizing media articles on both for-profit and not-for-profit organizations. Based on the model parameters, the paper provides a descriptive model of various organizational deflection strategies under numerous scenarios, including situations where ethical complaints build up, situations under which whistleblowers become more prevalent, situations where large scandals related to leadership occur, and strategies by which organizations deflect blame when pressure builds up or when the media finds out. The model parameters start with the premise of a tolerance for double standards in unethical acts when conducted by leadership or by members of corporate governance. Following this premise, the model explains how organizations engage in discursive strategies to cover up the potential conflicts that arise, including secret agreements and weakening stakeholders who may oppose the organizational acts. Deflection strategies include "preemptive" and "post-complaint" secret agreements, absent or vague documented procedures, engaging in blame and scapegoating, remaining silent on complaints until the media finds out, and being slow (if at all) to acknowledge misconduct and fast to cover it up. The results of this paper may be used to alert organizational leaders to the implications of such shortsighted strategies toward unethical acts, even if they are deemed legal. Validation of the model assumptions through numerous media articles is provided.

Keywords: ethical decision making, prediction, scandals, organizational strategies

Procedia PDF Downloads 101
231 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit

Authors: James Killian, Sarah Cox

Abstract:

The use of stability criteria within geotechnical engineering is the way the results of analyses are conveyed and sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) coming from a limit equilibrium calculation. Increasingly, the value derived from Strength Reduction Factor (SRF) analysis is being used as the criterion for stability analysis. The purpose of this work was to study in detail the relationship between SRF values produced by a numerical modeling technique and the traditional FOS values produced by Limit Equilibrium Method (LEM) analyses. This study utilized a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with high cohesion and friction angle values typical of a large hard rock mine slope. A number of variables affecting the value of the SRF in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases, SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF value, which can vary as much as 15% to the downside compared to FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zone sizes should not be constructed to be smaller than about 1% of the overall problem slope height and should not be greater than 2%. Future work could include investigations of the effect of anisotropic strength assumptions or advanced constitutive models.
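
To make the two criteria concrete, the minimal sketch below contrasts a limit-equilibrium FOS with a bisection-based strength reduction factor for the textbook infinite-slope case, where the two coincide by construction; the cohesion, friction angle, unit weight and depth are assumed values, and the sketch is not the paper's 3000-foot pit model, in which the two criteria generally differ.

```python
# Minimal illustration (not the paper's model): for an infinite slope with a
# Mohr-Coulomb surface, the limit-equilibrium FOS and the strength reduction
# factor found by bisection coincide; in a full numerical model they generally
# do not, which is the comparison the abstract investigates.
import math

c, phi_deg = 300.0e3, 45.0                 # cohesion [Pa], friction angle [deg] (illustrative)
gamma, z, beta_deg = 26.0e3, 30.0, 45.0    # unit weight [N/m^3], depth [m], slope angle [deg]

phi, beta = math.radians(phi_deg), math.radians(beta_deg)
sigma_n = gamma * z * math.cos(beta) ** 2                  # normal stress on the slip plane
tau_drive = gamma * z * math.cos(beta) * math.sin(beta)    # driving shear stress

# Limit-equilibrium factor of safety: available strength / driving shear.
fos_lem = (c + sigma_n * math.tan(phi)) / tau_drive

# Strength reduction: divide c and tan(phi) by F until the slope just fails.
def is_stable(F):
    return (c / F + sigma_n * math.tan(phi) / F) >= tau_drive

lo, hi = 0.1, 10.0
for _ in range(60):                        # bisection on the stable/unstable boundary
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if is_stable(mid) else (lo, mid)
srf = 0.5 * (lo + hi)

print(f"FOS (LEM)       = {fos_lem:.3f}")
print(f"SRF (bisection) = {srf:.3f}")
```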

Keywords: FOS, SRF, LEM, comparison

Procedia PDF Downloads 275
230 Lane-Change Path Planning of Autonomous Driving Using Model-Based Optimization, Deep Reinforcement Learning and 5G Vehicle-to-Vehicle Communications

Authors: William Li

Abstract:

Lane-change path planning is a crucial and yet complex task in autonomous driving. The traditional path planning approach, based on a system of carefully crafted rules to cover various driving scenarios, becomes unwieldy as more and more rules are added to deal with exceptions and corner cases. This paper proposes to divide the entire path planning task into two stages. In the first stage, the ego vehicle travels longitudinally in the source lane to reach a safe state. In the second stage, the ego vehicle makes the lateral lane-change maneuver to the target lane. The paper derives the safe state conditions based on lateral lane-change maneuver calculations to ensure a collision-free maneuver in the second stage. To determine the acceleration sequence that minimizes the time to reach a safe state in the first stage, the paper proposes three schemes, namely, kinetic model-based optimization, deep reinforcement learning, and 5G vehicle-to-vehicle (V2V) communications. The paper investigates these schemes via simulation. The model-based optimization is sensitive to the model assumptions. Deep reinforcement learning is more flexible in handling scenarios beyond the model assumed by the optimization. The 5G V2V scheme eliminates uncertainty in predicting the future behavior of surrounding vehicles by sharing driving intents and enabling cooperative driving.
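
The following sketch illustrates the kind of kinematic safe-state check the first stage relies on, under a constant-acceleration assumption; the gap, speeds, accelerations, maneuver duration and safety margin are all hypothetical values, not the paper's derived conditions.

```python
# Hedged sketch of a kinematic safe-state check (not the paper's derivation):
# during a lateral lane-change of duration T, the gap to the lead vehicle in
# the target lane must stay positive under constant-acceleration assumptions.
def min_gap_during_maneuver(gap0, v_ego, v_lead, a_ego, a_lead, T, dt=0.05):
    """Smallest bumper-to-bumper gap over the maneuver horizon [0, T]."""
    min_gap = gap0
    t = 0.0
    while t <= T:
        s_ego = v_ego * t + 0.5 * a_ego * t * t
        s_lead = v_lead * t + 0.5 * a_lead * t * t
        min_gap = min(min_gap, gap0 + s_lead - s_ego)
        t += dt
    return min_gap

# Example: ego at 20 m/s, lead vehicle at 18 m/s and 30 m ahead while braking
# mildly; a 4 s lateral maneuver is checked against a 2 m safety margin.
safe = min_gap_during_maneuver(gap0=30.0, v_ego=20.0, v_lead=18.0,
                               a_ego=0.0, a_lead=-1.0, T=4.0) > 2.0
print("safe to start lane change:", safe)
```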

Keywords: lane change, path planning, autonomous driving, deep reinforcement learning, 5G, V2V communications, connected vehicles

Procedia PDF Downloads 187
229 Optimizing the Morphology and Flow Patterns of Scaffold Perfusion Systems for Effective Cell Deposition Using Computational Fluid Dynamics

Authors: Vineeth Siripuram, Abhineet Nigam

Abstract:

A bioreactor is an engineered system that supports a biologically active environment. Over the years, advancements in bioreactors have been widely adopted all over the world for varied applications ranging from sewage treatment to tissue cloning. Driven by tissue and organ shortages, tissue engineering has emerged as an alternative to transplantation for the reconstruction of lost or damaged organs. In this study, computational fluid dynamics (CFD) has been used to model porous-medium flow in scaffolds (taken from the literature) with different flow patterns. A detailed analysis of different scaffold geometries and their influence on cell deposition in the perfusion system has been carried out using CFD. Considering that the scaffold should mimic organ or tissue structures in a three-dimensional manner, certain assumptions were made accordingly. Research on scaffolds has been extensively carried out in different bioreactors. However, there has been less focus on the morphology of the scaffolds and the flow patterns on which the perfusion system is based. The objective of this paper is to employ a computational approach using CFD simulation to determine the optimal morphology and the anisotropic measurements of various samples of scaffolds. Using a predictive computational modelling approach, variables which exert dominant effects on cell deposition within the scaffold were prioritised, and corresponding changes in the morphology of the scaffold and the flow patterns in the perfusion systems were made. A Eulerian approach was adopted in multiple CFD simulations, and it is observed that the morphological and topological changes in the scaffold perfusion system are of great importance in the commercial applications of scaffolds.

Keywords: cell seeding, CFD, flow patterns, modelling, perfusion systems, scaffold

Procedia PDF Downloads 138
228 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad Daba, Jean-Pierre Dubois

Abstract:

Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multi-diversity stochastically local area fading channel when the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.

Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process

Procedia PDF Downloads 428
227 Analysis of Shallow Foundation Using Conventional and Finite Element Approach

Authors: Sultan Al Shafian, Mozaher Ul Kabir, Khondoker Istiak Ahmad, Masnun Abrar, Mahfuza Khanum, Hossain M. Shahin

Abstract:

For the structural evaluation of shallow foundations, the modulus of subgrade reaction is one of the most widely used and accepted parameters because of its ease of calculation. To determine this parameter, one of the most common field methods is the plate load test. In this field test, the subgrade modulus is considered for a specific location, and it is assumed that the displacement that occurs at one place does not affect adjacent locations. Because of such assumptions, the modulus of subgrade reaction sometimes forces engineers to overdesign the underground structure, which eventually increases the cost of construction and sometimes leads to failure of the structure. In the present study, the settlement of a shallow foundation has been analyzed using both conventional and numerical analyses. Around 25 plate load tests were conducted on a sand fill site in Bangladesh to determine the modulus of subgrade reaction of the ground, which was later used to design a shallow foundation considering different depths. After the collection of the field data, the field condition was appropriately simulated in a finite element software package. Finally, the results obtained from the conventional and numerical approaches were compared. A significant difference was observed in the settlements when comparing the results. A correlation between the two methods has also been proposed at the end of this research work in order to provide the most efficient way to calculate the subgrade modulus of the ground for designing shallow foundations.
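
As a hedged illustration of the conventional calculation, the snippet below computes the modulus of subgrade reaction as pressure over settlement from a plate load test and extrapolates it from a 0.3 m plate to a wider footing using the common Terzaghi-type correction for sandy soils; the pressure, settlement and footing width are assumed values, not the Bangladeshi test data.

```python
# Illustrative calculation (values are hypothetical, not the paper's test data):
# the modulus of subgrade reaction from a plate load test is the applied
# pressure divided by the measured settlement, and is commonly extrapolated
# from a 0.3 m plate to a footing of width B for sandy soils.
applied_pressure = 150.0   # kPa at the chosen point of the load-settlement curve
settlement = 0.0125        # m measured under that pressure

k_plate = applied_pressure / settlement              # kPa/m, for the 0.3 m test plate
print(f"k_plate = {k_plate:.0f} kPa/m")

B = 2.0                                              # footing width in metres
k_footing = k_plate * ((B + 0.3) / (2.0 * B)) ** 2   # Terzaghi-type correction (sand)
print(f"k_footing (B = {B} m) = {k_footing:.0f} kPa/m")
```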

Keywords: modulus of subgrade reaction, shallow foundation, finite element analysis, settlement, plate load test

Procedia PDF Downloads 165
226 Performance Evaluation of a Small Microturbine Cogeneration Functional Model

Authors: Jeni A. Popescu, Sorin G. Tomescu, Valeriu A. Vilag

Abstract:

The paper focuses on potential methods of increasing the performance of a microturbine by combining additional elements available for utilization in a cogeneration plant. The activity is carried out within the framework of a project aiming to develop, manufacture and test a microturbine functional model with high potential for utilization in the energy industry. The main goal of the analysis is to determine the parameters of the fluid flow passing through each section of the turbine, based on limited data available in the literature for the targeted output power range or provided by experimental studies, starting from a reference cycle and considering different cycle options, including simple, intercooled and recuperated options, in order to optimize the operation of a small cogeneration plant. The studied configurations operate under the same initial thermodynamic conditions and are based on a series of assumptions in terms of the individual performance of the components, pressure/velocity losses, compression ratios, and efficiencies. The thermodynamic analysis evaluates the expected performance of the microturbine cycle while providing a series of input data and limitations to be included in the development of the experimental plan. To simplify the calculations and allow a clear estimation of the effect of heat transfer between fluids, the working fluid for all the thermodynamic evolutions is, initially, air, the combustion being modelled by simple heat addition to the system. The theoretical results, along with preliminary experimental results, are presented, aiming for a correlation in terms of microturbine performance.
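
An air-standard sketch of the simple versus recuperated cycle comparison is given below; the pressure ratio, turbine inlet temperature, component efficiencies and recuperator effectiveness are assumptions chosen only to illustrate the calculation, not the project's reference cycle data.

```python
# Air-standard sketch of simple vs. recuperated microturbine cycles
# (all efficiencies, temperatures and the pressure ratio are assumptions
# for illustration, not the project's data).
cp, gamma = 1005.0, 1.4               # air properties [J/kg.K], [-]
T1, T3 = 288.0, 1173.0                # compressor inlet, turbine inlet [K]
pr = 4.0                              # pressure ratio
eta_c, eta_t, eff_rec = 0.78, 0.85, 0.85   # isentropic efficiencies, recuperator effectiveness

# Compression and expansion with isentropic efficiencies
T2s = T1 * pr ** ((gamma - 1.0) / gamma)
T2 = T1 + (T2s - T1) / eta_c
T4s = T3 / pr ** ((gamma - 1.0) / gamma)
T4 = T3 - eta_t * (T3 - T4s)

w_net = cp * ((T3 - T4) - (T2 - T1))          # specific net work [J/kg]

q_in_simple = cp * (T3 - T2)                  # heat input without recuperation
T2r = T2 + eff_rec * (T4 - T2)                # air preheated by the exhaust
q_in_recup = cp * (T3 - T2r)

print(f"net specific work     : {w_net / 1000:.1f} kJ/kg")
print(f"thermal eff. (simple) : {w_net / q_in_simple:.3f}")
print(f"thermal eff. (recup.) : {w_net / q_in_recup:.3f}")
```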

Keywords: cogeneration, microturbine, performance, thermodynamic analysis

Procedia PDF Downloads 146
225 Exact Vibration Analysis of a Rectangular Nano-Plate Using Nonlocal Modified Sinusoidal Shear Deformation Theory

Authors: Korosh Khorshidi, Mohammad Khodadadi

Abstract:

In this paper, exact closed-form solutions for the out-of-plane free flexural vibration of moderately thick rectangular nanoplates are presented, for the first time, based on a nonlocal modified trigonometric shear deformation theory under Levy-type boundary conditions. The aim of this study is to evaluate the effect of small-scale parameters on the frequency parameters of moderately thick rectangular nanoplates. To describe the effects of small-scale parameters on the vibrations of rectangular nanoplates, the Eringen theory is used. The Levy-type boundary conditions are a combination of six different boundary conditions; specifically, two opposite edges are simply supported, and each of the other two edges can be simply supported, clamped or free. The governing equations of motion and boundary conditions of the plate are derived by using Hamilton's principle. The present analytical solution can be obtained with any required accuracy and can be used as a benchmark. Numerical results are presented to illustrate the effectiveness of the proposed method compared to other methods reported in the literature. Finally, the effects of boundary conditions, aspect ratios, the small-scale parameter and thickness ratios on the nondimensional natural frequency parameters and frequency ratios are examined and discussed in detail.
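
For orientation, the sketch below evaluates the closed-form Navier solution for an all-edges simply supported nonlocal Kirchhoff plate, a simpler special case than the modified sinusoidal shear deformation theory used in the paper, to show how Eringen's small-scale parameter lowers the natural frequencies; the material properties, dimensions and nonlocal parameter values are assumed for illustration.

```python
# Hedged illustration: natural frequencies of an all-edges simply supported
# nonlocal *Kirchhoff* plate (Navier solution). This is a simpler special case
# than the paper's modified sinusoidal shear deformation theory, but it shows
# how Eringen's small-scale parameter mu = (e0*a0)^2 lowers the frequencies.
import math

E, nu, rho = 1.06e12, 0.25, 2250.0      # illustrative graphene-like properties (SI units)
h, a, b = 0.34e-9, 10e-9, 10e-9         # thickness and plate dimensions [m]
D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity

def omega(m, n, mu):
    k2 = (m * math.pi / a) ** 2 + (n * math.pi / b) ** 2
    return math.sqrt(D * k2**2 / (rho * h * (1.0 + mu * k2)))

for mu in (0.0, 1e-18, 2e-18):          # mu = (e0*a0)^2 in m^2 (assumed values)
    print(f"mu = {mu:.0e} m^2 -> omega_11 = {omega(1, 1, mu):.3e} rad/s")
```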

Keywords: exact solution, nonlocal modified sinusoidal shear deformation theory, out of plane vibration, moderately thick rectangular plate

Procedia PDF Downloads 356
224 Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity

Authors: Ganiyat Soliu, Glen Bright, Chiemela Onunka

Abstract:

Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment to improve productivity. Queuing mathematical modeling theory was used to examine the performance of a multi-stage production line. Robotics, as a disruptive technology, was implemented in a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queuing mathematical model was used to determine the optimum service rate required by robots during the packaging stage of manufacturing to yield an optimum production cost. Different rates of production were assumed in the virtual manufacturing environment, and the cost of packaging was estimated along with the optimum production cost. An equation was generated using queuing mathematical modeling theory, and the method adopted for the analysis of the scenario is the Newton-Raphson method. The queuing theory presented here provides an adequate analysis of the number of robots required to regulate waiting time in order to increase output. The arrival rate of the product was high, which shows that the queuing mathematical model was effective in minimizing service cost and waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. The overall productivity was improved based on the assumptions used in the queuing modeling theory implemented in the virtual manufacturing scenario.
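
A minimal single-station sketch of the idea is given below: the robot service rate that minimises an assumed total cost (service cost plus waiting cost) in an M/M/1 queue is found by Newton-Raphson; the arrival rate and cost coefficients are hypothetical, and the paper's multi-stage line would require a more elaborate model.

```python
# Minimal sketch (assumed cost structure, single M/M/1 station rather than the
# paper's multi-stage line): choose the robot service rate mu that minimises
# total cost = service cost + waiting cost, using Newton-Raphson.
lam = 30.0          # arrival rate (products per hour), illustrative
c_service = 4.0     # cost per unit of service rate
c_wait = 12.0       # cost per product held in the system per hour

def dcost(mu):      # derivative of C(mu) = c_service*mu + c_wait*lam/(mu - lam)
    return c_service - c_wait * lam / (mu - lam) ** 2

def d2cost(mu):
    return 2.0 * c_wait * lam / (mu - lam) ** 3

mu = lam + 5.0                       # start above lam to keep the queue stable
for _ in range(25):                  # Newton-Raphson iterations
    mu -= dcost(mu) / d2cost(mu)

Wq = lam / (mu * (mu - lam))         # mean waiting time in queue (M/M/1)
print(f"optimal service rate mu* = {mu:.2f} per hour")
print(f"mean waiting time Wq     = {60.0 * Wq:.2f} minutes")
```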

Keywords: performance optimization, productivity, queuing theory, robotics

Procedia PDF Downloads 123
223 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems

Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh

Abstract:

It is well known that the nonlinear conjugate gradient method is one of the most widely used first-order methods for solving large scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency and nice convergence properties, nonlinear conjugate gradient methods have a special role in solving large scale unconstrained optimization problems. Large scale optimization problems have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods have restricted information about the curvature of the objective function, and they are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independent of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence property of the proposed algorithm is established. Furthermore, to test the practical computational performance of our new algorithm, numerical experiments are provided and implemented on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
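
The sketch below shows the general family the paper builds on: a nonlinear conjugate gradient iteration with a PR+ update, a descent safeguard and a backtracking Armijo line search, applied to the Rosenbrock function; it is an illustrative stand-in, not the authors' specific modified direction or their Wolfe line search.

```python
# Sketch of a nonlinear conjugate gradient method with a PR+ update and a
# descent safeguard, using a backtracking Armijo line search. This illustrates
# the general family; it is not the authors' specific modified direction.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                      200.0 * (x[1] - x[0]**2)])

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                     # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, slope = 1.0, g @ d              # backtracking Armijo line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0.0:                   # safeguard: keep a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

print(cg_minimize(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))   # approaches [1, 1]
```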

Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property

Procedia PDF Downloads 179
222 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series have fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, making it possible to obtain well-behaved signal components without any prior assumptions about the input data. Among the most popular time series decomposition techniques, most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above as well as their ability to denoise signals, to capture trends, to identify components corresponding to the physical processes involved in the evolution of the observed system and to deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series will be discussed and compared.
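
As one concrete example of the decomposition techniques compared, the sketch below implements basic singular spectrum analysis on a synthetic trend-plus-oscillation series: embed the series in a trajectory matrix, take its SVD, and reconstruct components by diagonal averaging; the series, window length and number of components are assumptions for illustration.

```python
# Minimal singular spectrum analysis (SSA) sketch on a synthetic series:
# embed the series into a trajectory (Hankel) matrix, take its SVD, and
# reconstruct components by diagonal averaging. Illustrative only; the paper
# also compares EMD and wavelet decompositions.
import numpy as np

def ssa_components(x, window, n_components):
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])                  # rank-1 piece
        comp = np.zeros(n)
        counts = np.zeros(n)
        for r in range(window):                               # diagonal averaging
            for c in range(k):
                comp[r + c] += Xi[r, c]
                counts[r + c] += 1.0
        comps.append(comp / counts)
    return comps

t = np.arange(400)
series = 0.01 * t + np.sin(2 * np.pi * t / 50) \
         + 0.3 * np.random.default_rng(0).normal(size=400)
trend, c1, c2 = ssa_components(series, window=100, n_components=3)
print("variance captured by first 3 components:",
      round(np.var(trend + c1 + c2) / np.var(series), 3))
```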

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 86
221 Development of Probability Distribution Models for Degree of Bending (DoB) in Chord Member of Tubular X-Joints under Bending Loads

Authors: Hamid Ahmadi, Amirreza Ghaffari

Abstract:

The fatigue life of tubular joints in offshore structures is not only dependent on the value of the hot-spot stress but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The DoB exhibits considerable scatter, calling for greater emphasis on the accurate determination of its governing probability distribution, which is a key input for the fatigue reliability analysis of a tubular joint. Although tubular X-joints are commonly found in offshore jacket structures, as far as the authors are aware, no comprehensive research has been carried out on the probability distribution of the DoB in tubular X-joints. What has been used so far as the probability distribution of the DoB in reliability analyses is mainly based on assumptions and limited observations, especially in terms of distribution parameters. In the present paper, results of the parametric equations available for the calculation of the DoB have been used to develop probability distribution models for the DoB in the chord member of tubular X-joints subjected to four types of bending loads. Based on a parametric study, a set of samples was prepared, and density histograms were generated for these samples using the Freedman-Diaconis method. Twelve different probability density functions (PDFs) were fitted to these histograms. The maximum likelihood method was utilized to determine the parameters of the fitted distributions. In each case, the Kolmogorov-Smirnov test was used to evaluate the goodness of fit. Finally, after substituting the values of the estimated parameters for each distribution, a set of fully defined PDFs has been proposed for the DoB in tubular X-joints subjected to bending loads.
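
A hedged sketch of this fitting workflow on synthetic DoB-like data is shown below, using the Freedman-Diaconis rule for the histogram bins, maximum-likelihood fits for a few candidate distributions and the Kolmogorov-Smirnov statistic; the sample and the candidate set are illustrative, since the paper's inputs come from its parametric equations and twelve candidate PDFs.

```python
# Hedged sketch of the fitting workflow on synthetic DoB-like data (the real
# inputs come from the parametric equations, which are not reproduced here):
# Freedman-Diaconis histogram, maximum-likelihood fits, Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dob = rng.normal(loc=0.8, scale=0.05, size=2000)      # synthetic DoB sample

bin_edges = np.histogram_bin_edges(dob, bins="fd")    # Freedman-Diaconis rule
print(f"{len(bin_edges) - 1} histogram bins")

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "gamma": stats.gamma}
for name, dist in candidates.items():
    params = dist.fit(dob)                            # maximum-likelihood estimates
    ks_stat, p_value = stats.kstest(dob, dist.cdf, args=params)
    print(f"{name:>9s}: KS statistic = {ks_stat:.4f}, p = {p_value:.3f}")
```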

Keywords: tubular X-joint, degree of bending (DoB), probability density function (PDF), Kolmogorov-Smirnov goodness-of-fit test

Procedia PDF Downloads 701
220 Addressing Cultural Discrimination in Research Design: The Responsibilities of Ethics Committees

Authors: Elspeth McInnes

Abstract:

Research design is central to ethical research. Discriminatory research design is a key risk for researchers examining diverse cultural groups without conscious commitment to anti-discrimination values or knowledge of their culture. Culturally discriminatory research design is defined here as research proceeding from negative assumptions about people on the basis of race, colour, ethnicity, nationality or religion. Such discrimination can be direct or indirect. Direct discrimination is the uncritical mobilization of dominant group negative stereotypes of cultural minorities. Indirect discrimination is the examination of policies or programs grounded in dominant culture negative stereotypes that have been uncritically accepted by the researchers. This paper draws on anonymized elements of planned research projects and considers both direct and indirect cultural discrimination in research design and the responsibilities of ethics committees. Human research ethics committees provide a point of scrutiny with responsibility to alert researchers to the risks of basing research on negative cultural stereotypes, as well as protecting participants from being subjected to negative discourses about them. This issue has become an increasing concern in a globalizing world of human displacement and migration, which has created a rise in the presence of minority cultures in host countries. As a nation established through colonization and immigration, Australia has a long history of negative cultural stereotypes of Indigenous Australians as well as a legacy of the White Australia policy, which still echoes in attitudes to each wave of non-European immigration. The task of eliminating cultural discrimination in research design is vital to sustaining research integrity and ensuring that research is not used to reinforce or justify cultural discrimination.

Keywords: cultural discrimination, cultural stereotypes, participant risk, research design

Procedia PDF Downloads 116
219 Reliability Based Analysis of Multi-Lane Reinforced Concrete Slab Bridges

Authors: Ali Mahmoud, Shadi Najjar, Mounir Mabsout, Kassim Tarhini

Abstract:

Empirical expressions for estimating the wheel load distribution and live-load bending moment are typically specified in highway bridge codes such as the AASHTO procedures. The purpose of this paper is to analyze the reliability levels inherent in reinforced concrete slab bridges that are designed based on the simplified empirical live-load equations in the AASHTO LRFD procedures. To achieve this objective, bridges with multiple lanes (three and four lanes) and different spans are modeled using finite-element analysis (FEA) subjected to HS20 truck loading, tandem loading, and standard lane loading per the AASHTO LRFD procedures. The FEA results are compared with the AASHTO LRFD moments in order to quantify the biases that might result from the simplifying assumptions adopted in AASHTO. A reliability analysis is conducted to quantify the reliability index for bridges designed using the AASHTO procedures. To reach a consistent level of safety for three- and four-lane bridges, following a previous study restricted to one- and two-lane bridges, the live load factor in the design equation proposed by AASHTO LRFD will be assessed and revised if needed by adjusting the live load factor for these lanes. The results will provide structural engineers with more consistent provisions to design concrete slab bridges or to evaluate the load-carrying capacity of existing bridges.
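
For context, the snippet below evaluates the classical closed-form reliability index for a flexural limit state with lognormal resistance and load effect; the bias and coefficient-of-variation values are assumed for illustration and are not the statistics derived from the paper's FEA results.

```python
# Simplified illustration of a reliability index calculation for a flexural
# limit state with lognormal resistance R and load effect S (values are
# illustrative, not taken from the paper's FEA results).
import math

mean_R, cov_R = 1.80, 0.12     # normalised resistance (mean bias and c.o.v.)
mean_S, cov_S = 0.85, 0.18     # normalised live-load moment effect

beta = math.log((mean_R / mean_S) * math.sqrt((1 + cov_S**2) / (1 + cov_R**2))) \
       / math.sqrt(math.log((1 + cov_R**2) * (1 + cov_S**2)))
print(f"reliability index beta = {beta:.2f}")   # AASHTO LRFD calibration targets about 3.5
```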

Keywords: reliability analysis of concrete bridges, finite element modeling, reliability analysis, reinforced concrete bridge design, load carrying capacity

Procedia PDF Downloads 317
218 Climate Change Effects on Agriculture

Authors: Abdellatif Chebboub

Abstract:

Agricultural production is sensitive to weather and thus directly affected by climate change. Plausible estimates of these climate change impacts require combined use of climate, crop, and economic models. Results from previous studies vary substantially due to differences in models, scenarios, and data. This paper is part of a collective effort to systematically integrate these three types of models. We focus on the economic component of the assessment, investigating how nine global economic models of agriculture represent endogenous responses to seven standardized climate change scenarios produced by two climate and five crop models. These responses include adjustments in yields, area, consumption, and international trade. We apply biophysical shocks derived from the Intergovernmental Panel on Climate Change’s representative concentration pathway with end-of-century radiative forcing of 8.5 W/m2. The mean biophysical yield effect with no incremental CO2 fertilization is a 17% reduction globally by 2050 relative to a scenario with unchanging climate. Endogenous economic responses reduce yield loss to 11%, increase area of major crops by 11%, and reduce consumption by 3%. Agricultural production, cropland area, trade, and prices show the greatest degree of variability in response to climate change, and consumption the lowest. The sources of these differences include model structure and specification; in particular, model assumptions about ease of land use conversion, intensification, and trade. This study identifies where models disagree on the relative responses to climate shocks and highlights research activities needed to improve the representation of agricultural adaptation responses to climate change.

Keywords: climate change, agriculture, weather change, danger of climate change

Procedia PDF Downloads 294
217 The Concept of Accounting in Islamic Transactions

Authors: Ahmad Abdulkadir Ibrahim

Abstract:

The Islamic law of transactions laid down the methods and instruments of accounting and analyzed its basic assumptions in the modern world. There is a need to examine the implications of accounting initiatives in the Muslim world and to attempt to outline the important characteristics of Islamic accounting and how Islamic accounting resolves the problem of measuring the cost of Murabaha goods in the case of exchange rate variation. The research discusses an analytical approach to the Islamic accounting concept as well as elaborating on the jurisprudential and practical aspects of accounting in Islamic financial transactions. It also aims to alert practitioners of accounting in the Islamic world to the concept of accounting in Islamic jurisprudence and its historical development. The methodology adopted in this research is the qualitative method, through the consultation of relevant literature, which focuses on a thematic study of the subject matter. This is followed by an analysis and discussion of the contents of the materials used. It is concluded that Islamic accounting is unique in its norms, as it has been characterized by fairness, accuracy in measuring tools, truthfulness, mutual trust, moderation in making a profit, and tolerance. It is also qualified by capacity and flexibility in terms of the tools and terminology used and invented by Islamic jurisprudence in the accounting system, which indicates its validity and consistency anytime and anywhere. An important conclusion of the research also lies in the refutation of the popular idea that the Italian writer Luca Pacioli was the first to develop the basis of double-entry bookkeeping, given the proofs presented by Muslim scholars of critical accounting developments, which cannot be ignored. It concludes further that Islamic jurisprudence codifies an accounting system founded on a market free from usury, fraud, cheating, and unfair competition in all areas.

Keywords: accounting, Islamic accounting, Islamic transactions, Islamic jurisprudence, double entry, murabaha, characteristics

Procedia PDF Downloads 44
216 Not Three Gods but One: Why Reductionism Does Not Serve Our Theological Discourse

Authors: Finley Lawson

Abstract:

The triune nature of God is one of the most complex doctrines of Christianity, and its complexity is further compounded when one considers the incarnation. However, many of the difficulties and paradoxes associated with our idea of the divine arise from our adherence to reductionist ontology. In order to move our theological discourse forward, in respect of divine and human nature, a holistic interpretation of our profession of faith is necessary. The challenge of a holistic interpretation is that it questions our ability to make any statement about the genuine, ontological individuation of persons (both divine and human), and in doing so raises the issue of whether we are, ontologically, bound to descend into a form of pan(en)theism. In order to address this 'inevitable' slide into pan(en)theism, the impact of two forms of holistic interpretation, Boolean and Non-Boolean, on our concept of personhood will be examined. Whilst a Boolean interpretation allows for a greater understanding of the relational nature of the Trinity, it is the Non-Boolean interpretation which has greater ontological significance. A Non-Boolean ontology, grounded in our scientific understanding of the nature of the world, shows that our quest for individuation rests not in ontological fact but in epistemic need, and that it is our limited epistemology that drives our need to divide that which is ontologically indivisible. This discussion takes place within a 'methodological', rather than 'doctrinal', approach to science and religion - examining assumptions and methods that have shaped our language and beliefs about key doctrines, rather than seeking to reconcile particular Christian doctrines with particular scientific theories. Concluding that Non-Boolean holism is the more significant for our doctrine is, in itself, not enough. A world without division appears far removed from the distinct place of man and the divine as espoused in our creedal affirmation; to this end, several possible interpretations for understanding Non-Boolean human-divine relations are tentatively put forward for consideration.

Keywords: holism, individuation, ontology, Trinitarian relations

Procedia PDF Downloads 230
215 Disrupting Patriarchy: Transforming Gender Oppression through Dialogue between Women and Men at a South African University

Authors: S. van Schalkwyk

Abstract:

On international levels and across disciplines, gender scholars have argued that patriarchal scripts of masculinity and femininity are harmful, as they negatively impact constructions of selfhood and relations between women and men. Patriarchal ideologies serve as a scaffolding for dominance and subordination and fuel violence against women. Toxic masculinity (social discourses of men as violent, unemotional, and sexually dominant) is embedded in South African culture and is rooted in the country's high rates of gender violence. Finding strategies that can open up space for the interrogation of toxic masculinity is crucial in order to disrupt the destructive consequences of patriarchy in educational and social contexts. The University of the Free State (UFS) in South Africa, in collaboration with the non-profit organization Gender Reconciliation International, conducted a year-long series of workshops with male and female students. The aim of these workshops was to facilitate healing between men and women through collective dialogue processes. Drawing on a collective biography methodology outlined by feminist poststructuralists, this paper explores the impact of these workshops on gender relations. Findings show that the students experienced significant psychological connections with others during these dialogues, through which they began to interrogate their own gendered conditioning and harmful patriarchal assumptions and practices. This paper enhances insights into the possibilities for disrupting patriarchy in South African universities through feminist collective research efforts.

Keywords: collective biography methodology, South Africa, toxic masculinity, transforming gender oppression, violence against women

Procedia PDF Downloads 460
214 Social Media Diffusion and Implications for Opinion Leadership in Northcentral Nigeria

Authors: Chuks Odiegwu-Enwerem

Abstract:

The classical notion of opinion leadership presupposes that the media is at the center of effective and successful opinion leadership. Under this idea, an opinion leader is an active media user who consumes, understands, digests and interprets messages for understanding and acceptance/adoption by lower-end media users, whose access to and understanding of media content are supposedly low. Because of their unique access to and presumed understanding of media functions and content, opinion leaders are typically esteemed by those who look forward to and accept their opinions. Lazarsfeld and Katz's two-step flow of communication theory is the basis of opinion leadership, propelled by limited access to the media. With the emergence and spread of social media and its unlimited access by all and sundry, however, this study interrogates the relevance and application of opinion leaders and, by implication, the two-step flow communication theory in Nigeria's Northcentral region. It seeks to determine whether opinion leaders still exist in the picture and whether they still exert considerable influence, especially in matters of political conversation and decision-making among the citizens of this area. It further explores whether the diffusion of social media is a reality, how the 'low-end' media users react to their new-found freedom of access to the media, how they are using it to inform their decisions on important matters, and whether they are still glued to their opinion leaders. The study explores the empirical dimensions of the two-step flow hypothesis in relation to the activities of social media to determine whether a change has occurred and in what direction, using the mixed methods of a survey and in-depth interviews. Our understanding and belief in some theoretical assumptions may be enhanced or challenged by the study outcome.

Keywords: opinion leadership, active media user, two-step flow, social media, Northcentral Nigeria

Procedia PDF Downloads 49
213 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

The complex oblique shock phenomenon can be simply represented as a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models; most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant area-mixing (CAM) ejector and a constant pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for the two types of ejectors. These reference data are then used as input to the 1D model to calculate the lengths and diameters of the ejectors. Afterwards, the design output geometry calculated by the 1D model is compared directly with the corresponding experimental geometry. It was found that there is good agreement between the ejector dimensions obtained by the 1D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and the assumption of a normal shock at the inlet results in a more accurate length. Taking into account previous 1D models, the results suggest using the assumed normal shock location at the inlet of the constant-area duct to design supersonic ejectors.
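
The normal shock assumption referred to above amounts to applying the standard 1-D perfect-gas jump relations at the assumed location; the sketch below evaluates them for an assumed upstream Mach number with gamma = 1.4, whereas the paper's working fluids (R245fa, R141b) would require real-fluid properties.

```python
# Standard 1-D normal shock relations for a perfect gas, of the kind applied at
# the assumed shock location in ejector models (gamma and the upstream Mach
# number below are illustrative; refrigerant property models would differ).
import math

def normal_shock(M1, gamma=1.4):
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) /
                   (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2 / p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2 / rho1
    T_ratio = p_ratio / rho_ratio                                # T2 / T1
    return M2, p_ratio, T_ratio

M2, pr, Tr = normal_shock(1.8)
print(f"M2 = {M2:.3f}, p2/p1 = {pr:.3f}, T2/T1 = {Tr:.3f}")
```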

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions

Procedia PDF Downloads 175
212 Probabilistic Fracture Evaluation of Reactor Pressure Vessel Subjected to Pressurized Thermal Shock

Authors: Jianguo Chen, Fenggang Zang, Yu Yang, Liangang Zheng

Abstract:

The Reactor Pressure Vessel (RPV) is an important safety barrier in a nuclear power plant. Crack-like defects may be produced in the RPV over the whole operating lifetime due to the harsh operating conditions and irradiation embrittlement. During a severe loss-of-coolant accident, thermal shock occurs as emergency cooling water is injected into the RPV, which results in re-pressurization of the vessel and very high tensile stress on the vessel wall; this event is called a Pressurized Thermal Shock (PTS). A crack on the vessel wall may propagate and even penetrate the vessel, so the safety of the RPV is severely challenged. Many assumptions in structural integrity evaluation make the results of deterministic fracture mechanics very conservative, which affects the operating lifetime of the plant. In fact, many parameters in the evaluation process, such as the fracture toughness and the nil-ductility transition temperature, have statistical distribution characteristics. It is therefore necessary to assess the structural integrity of an RPV subjected to a PTS event by means of Probabilistic Fracture Mechanics (PFM). Structural integrity evaluation methods for an RPV subjected to a PTS event are summarized first; then an evaluation method based on probabilistic fracture mechanics is presented by considering the probabilistic characteristics of the material and structural parameters. A comprehensive analysis example is carried out at the end. The results show that the probability of a crack penetrating the wall increases gradually with the growth of the fast neutron irradiation flux. The results give guidance for reactor life extension.
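
A heavily simplified Monte Carlo sketch of the PFM idea is given below: sample the applied stress intensity factor and the irradiation-degraded fracture toughness from assumed distributions and count how often the toughness is exceeded; the distributions and parameters are purely illustrative, not the RPV transient data used in the paper.

```python
# Hedged Monte Carlo sketch of the PFM idea: sample fracture toughness and the
# applied stress intensity factor from assumed distributions and estimate the
# probability that K_I exceeds K_Ic. Distributions and parameters are purely
# illustrative, not the RPV data used in the paper.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Applied stress intensity factor during the PTS transient [MPa*sqrt(m)]
K_I = rng.normal(loc=40.0, scale=6.0, size=n)

# Fracture toughness degraded by irradiation embrittlement [MPa*sqrt(m)]
K_Ic = rng.weibull(a=4.0, size=n) * 60.0 + 20.0

p_failure = np.mean(K_I > K_Ic)
print(f"estimated conditional probability of crack initiation: {p_failure:.2e}")
```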

Keywords: fracture toughness, integrity evaluation, pressurized thermal shock, probabilistic fracture mechanics, reactor pressure vessel

Procedia PDF Downloads 226
211 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets

Authors: Haripriya Chakraborty

Abstract:

The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing these inefficiencies. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of some interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.

Keywords: agent-based modeling, healthcare reform, insurance markets, public policy

Procedia PDF Downloads 119
210 Factors Affecting the Adoption of Cloud Business Intelligence among Healthcare Sector: A Case Study of Saudi Arabia

Authors: Raed Alsufyani, Hissam Tawfik, Victor Chang, Muthu Ramachandran

Abstract:

This study investigates the factors that influence the decision by players in the healthcare sector to embrace Cloud Business Intelligence technology, with a focus on healthcare organizations in Saudi Arabia. To bring this matter into perspective, the study primarily considers the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model. A survey was designed based on the literature review and the study hypotheses and was carried out online. The quantitative data obtained were processed using descriptive and one-way frequency statistics as well as inferential and regression analyses. The data were analysed to establish the factors that influence the decision to adopt cloud business intelligence technology in the healthcare sector. The implication of the identified factors was measured, and all assumptions were tested. 66.70% of participants in healthcare organizations backed the intention to adopt a cloud business intelligence system. 99.4% of these participants considered security concerns and privacy risks to be the most significant factors in the adoption of a cloud Business Intelligence (CBI) system. Regression-based hypothesis testing indicates that usefulness, service quality, relative advantage, IT infrastructure preparedness, organization structure, vendor support, perceived technical competence, government support, and top management support positively and significantly influence the adoption of a CBI system. The paper presents the quantitative phase of an ongoing project, which will build on the lessons learned from this study.

Keywords: cloud computing, business intelligence, HOT-fit model, TOE, healthcare and innovation adoption

Procedia PDF Downloads 146
209 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City

Authors: Christian Kapuku, Seung-Young Kho

Abstract:

An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such a representation often requires developing an abstract model of the system elements, which in turn requires a substantial amount of data, surveys and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source Internet data, especially in mapping technologies, as well as advances in Geographic Information Systems (GIS), opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones using available scanned images. Centroids were then dynamically placed at the centers of activities using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and the satellite images, which can mostly be attributed to the use of open source data.

Keywords: geographic information system (GIS), network construction, transportation database, open source data

Procedia PDF Downloads 150
208 Elasticity Model for Easing Peak Hour Demand for Metrorail Transport System

Authors: P. K. Sarkar, Amit Kumar Jain

Abstract:

The demand for urban transportation is characterised by large-scale temporal and spatial variations, which cause heavy congestion inside metro trains during peak hours near the Central Business District (CBD) of the city. The conventional approach to addressing peak hour congestion in metro trains has been to increase supply by introducing more trains, increasing the length of the trains, and optimising the timetable to increase the capacity of the system. However, there is a limit to supply-side measures, determined by the design capacity of the system, beyond which any addition to capacity requires huge capital investments. Demand-side interventions are therefore required to actually spread the demand across time and space. In this study, an attempt has been made to identify the potential transport demand management tools applicable to urban rail transportation systems, with a special focus on differential pricing. A conceptual price elasticity model has been developed to analyse the effect of various combinations of peak and non-peak hour fares on demand. The elasticity values for peak hour, non-peak hour and cross elasticity have been assumed from the relevant literature available in the field. The conceptual price elasticity model so developed is based on assumptions which need to be validated with actual values of elasticities for different segments of passengers. Once validated, the model can be used to determine the peak and non-peak hour fares with the objective of increasing overall ridership, revenue, demand levelling and the optimal utilisation of assets.
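
A minimal sketch of how such a conceptual elasticity model responds to a differential fare change is given below; the elasticity values, base ridership figures and fare changes are assumptions for illustration, not the values adopted in the study.

```python
# Conceptual sketch of the differential-fare demand model (elasticity values
# and base demands below are assumed for illustration, not the study's data).
e_peak_own, e_off_own, e_cross = -0.25, -0.35, 0.10   # own- and cross-price elasticities

def demand_shift(q_peak, q_off, d_fare_peak, d_fare_off):
    """Percentage fare changes -> new peak and off-peak ridership."""
    q_peak_new = q_peak * (1 + e_peak_own * d_fare_peak + e_cross * d_fare_off)
    q_off_new = q_off * (1 + e_off_own * d_fare_off + e_cross * d_fare_peak)
    return q_peak_new, q_off_new

# Example: raise peak fares by 20% and cut off-peak fares by 20%
q_peak, q_off = 100_000, 60_000                       # daily trips (illustrative)
new_peak, new_off = demand_shift(q_peak, q_off, 0.20, -0.20)
print(f"peak trips    : {q_peak} -> {new_peak:.0f}")
print(f"off-peak trips: {q_off} -> {new_off:.0f}")
print(f"total trips   : {q_peak + q_off} -> {new_peak + new_off:.0f}")
```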

Keywords: urban transport, differential fares, congestion, transport demand management, elasticity

Procedia PDF Downloads 289
207 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists working in population genetics and phylogeny face different research tasks, such as assessing populations' genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach a concise conclusion, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, the application of different bioinformatic and statistical methods, which are based on various well-documented assumptions, has provided the proper analytical tools in the hands of researchers. Species delineation is usually carried out with the use of different clustering methods, like K-means clustering, based on proper distance measures according to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by different analytical methods like multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram; these are based on bootstrapping of the data. The association of different genes and DNA sequences with ecological and geographical variables is determined by LFMM (latent factor mixed models) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We shall illustrate these methods and the related conclusions by giving examples from different edible and medicinal plant species.
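
As an example of one of these tools, the sketch below performs a principal coordinate analysis (classical metric MDS) directly from a pairwise distance matrix via double centring and eigendecomposition; the distance matrix is a made-up stand-in for a molecular-marker distance matrix.

```python
# Minimal principal coordinate analysis (PCoA, classical MDS) from a distance
# matrix via double centering and eigendecomposition; the distance matrix below
# is a made-up example standing in for a molecular-marker distance matrix.
import numpy as np

D = np.array([[0.0, 0.2, 0.6, 0.7],
              [0.2, 0.0, 0.5, 0.6],
              [0.6, 0.5, 0.0, 0.3],
              [0.7, 0.6, 0.3, 0.0]])        # pairwise genetic distances (illustrative)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gower matrix
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]             # largest eigenvalues first
coords = eigvec[:, order[:2]] * np.sqrt(np.maximum(eigval[order[:2]], 0.0))

print("PCoA coordinates (first two axes):")
print(np.round(coords, 3))
```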

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 101
206 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) deals with both substitution synergies (exchange of waste materials, fatal energy and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on the intensification of the use of an asset and thus requires balancing capital cost increments with snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and a pricing schedule) to commit to investments in capacities and transactions. Our model investigates the decision of two actors trying to cooperatively choose a level of infrastructure capacity oversizing to set a plug-and-play offer to a potential entrant whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period. The entrant's willingness to pay for access to the infrastructure depends on its standalone cost and the capacity gap that it must complete in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractable from the entrant, we derive the expected complement cost function, which helps us define the investors' objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant's requirements. We then derive the general form of the solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy a public actor would apply to foster such sharing synergies.
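
A numerical sketch of the resulting trade-off under a uniformly distributed entrant requirement is given below: the investors weigh the incremental cost of oversizing against the expected complement cost; the sub-additive cost function, the requirement range and the base requirement are assumptions chosen only to illustrate the shape of the problem, not the paper's calibration.

```python
# Numerical sketch of the capacity-oversizing trade-off under a uniformly
# distributed entrant requirement (cost parameters are assumptions chosen
# only to illustrate the shape of the problem, not the paper's calibration).
import numpy as np

def capacity_cost(q):                 # sub-additive capacity cost (economies of scale)
    return 10.0 * q ** 0.7

def expected_complement_cost(extra, lo=0.0, hi=50.0, samples=20001):
    """E[ cost of completing the gap max(0, X - extra) ], X ~ Uniform(lo, hi)."""
    x = np.linspace(lo, hi, samples)
    gap = np.maximum(0.0, x - extra)
    return float(np.mean(capacity_cost(gap)))

base_requirement = 100.0
oversizes = np.linspace(0.0, 50.0, 251)
totals = [capacity_cost(base_requirement + k) - capacity_cost(base_requirement)
          + expected_complement_cost(k) for k in oversizes]
best = oversizes[int(np.argmin(totals))]
print(f"cost-minimising oversizing increment: {best:.1f} capacity units")
```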

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 188
212 Comparison of Some Robust Regression Methods with the OLS Method with an Application

Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq

Abstract:

The classical method of least squares (OLS) is used to estimate the linear regression parameters when its assumptions are satisfied, and the resulting estimators then have good properties such as unbiasedness, minimum variance, and consistency. Alternative statistical techniques have been developed to estimate the parameters when the data are contaminated with outliers; these are robust (or resistant) methods. In this paper, three robust methods are studied, namely the maximum likelihood type estimator (M-estimator), the modified maximum likelihood type estimator (MM-estimator) and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the obtained results were compared using the criteria: Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Sum of Absolute Error (MSAE). Important conclusions of this study are the following. The atypical values detected by the four methods in the furniture line are very close to the rest of the data, which indicates that the distribution of the errors is close to normal; in the doors line data, however, OLS detects fewer atypical values than the robust methods, which means that the distribution of the errors departs far from normality. Another important conclusion is that the parameter estimates obtained by OLS are very far from those obtained by the robust methods for the doors line; the LTS-estimator gave better results according to the MSE criterion, the M-estimator gave better results according to MAPE, and the MM-estimator was better according to MSAE. The programs S-plus (version 8.0, professional 2007), Minitab (version 13.2) and SPSS (version 17) were used to analyze the data.
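
A hedged sketch of this kind of comparison is shown below, contrasting OLS with a Huber M-estimator on a synthetic data set containing injected outliers and reporting MSE and MAPE; the company data are not reproduced here, and the MM- and LTS-estimators of the paper are omitted for brevity.

```python
# Hedged sketch comparing OLS with Huber M-estimation on data containing
# outliers, using statsmodels (the furniture-company data are not public, so a
# synthetic set is used here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 80)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=80)
y[::15] += 20.0                                   # inject a few large outliers

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

for name, fit in (("OLS", ols_fit), ("Huber M", huber_fit)):
    resid = y - fit.predict(X)
    mse = np.mean(resid ** 2)
    mape = np.mean(np.abs(resid / y)) * 100.0
    print(f"{name:>8s}: slope = {fit.params[1]:.3f}, MSE = {mse:.2f}, MAPE = {mape:.1f}%")
```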

Keywords: robust regression, LTS, M-estimator, MSE

Procedia PDF Downloads 217