Search results for: likelihood method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18861

18681 The Implementation of the Secant Method for Finding the Root of an Interpolation Function

Authors: Nur Rokhman

Abstract:

A mathematical function expresses a relationship between the variables composing it. Interpolation can be viewed as the process of finding a mathematical function that passes through a set of specified points. There are many interpolation methods, such as the Lagrange method, the Newton method, and the spline method. Under some conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; it consists instead of computational steps. Solving an equation that involves such an interpolation function is therefore a nonlinear root-finding problem. Newton's method does not work on the interpolation function, because the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the secant method to determine the numerical solution of equations involving the interpolation function. The experiments show that the secant method works better than Newton's method in finding the root of a Lagrange interpolation function.
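The contrast between the two root-finders can be sketched in a few lines of Python. The sample points below (chosen so the Lagrange interpolant is exactly x^2 - 2) and the function names are illustrative, not the paper's implementation; note that the secant update needs only function values, never a derivative:

```python
def lagrange(points, x):
    # Evaluate the Lagrange interpolation polynomial through `points` at x.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    # Secant iteration: x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Three samples of f(x) = x^2 - 2; the quadratic interpolant fits it exactly,
# so the root of the interpolant is sqrt(2).
pts = [(0.0, -2.0), (1.0, -1.0), (2.0, 2.0)]
root = secant(lambda x: lagrange(pts, x), 1.0, 2.0)
```

Because `lagrange` is treated as a black box, the same call works even when the interpolant exists only as computational steps, which is the situation the abstract describes.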

Keywords: secant method, interpolation, nonlinear function, numerical solution

Procedia PDF Downloads 352
18680 Parameter Estimation for Contact Tracing in Graph-Based Models

Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar

Abstract:

We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since not all infectees are discovered via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability of detecting index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
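As a toy illustration of the likelihood idea only (not the authors' tree-based estimator), suppose each index case has a known number of contacts n and the number of traced infectees per case is binomial; the tracing probability can then be recovered by maximizing the log-likelihood over a grid. All counts below are made up:

```python
import math

def log_likelihood(p, counts, n):
    # Binomial log-likelihood for tracing probability p:
    # each index case has n contacts, of which k are detected.
    ll = 0.0
    for k in counts:
        ll += (math.log(math.comb(n, k))
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return ll

counts = [2, 3, 1, 4, 2, 3]   # detectees per index case (illustrative)
n = 10                         # contacts per index case (illustrative)
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, counts, n))
```

For the binomial model the maximizer coincides with the sample fraction, here 15/60 = 0.25; the real estimator in the abstract must additionally account for undetected infectees and the tree structure.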

Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference

Procedia PDF Downloads 47
18679 Ductility Spectrum Method for the Design and Verification of Structures

Authors: B. Chikh, L. Moussa, H. Bechtoula, Y. Mehani, A. Zerzour

Abstract:

This study presents a new method, applicable to the evaluation and design of structures, which is illustrated by comparison with the capacity spectrum method (CSM, ATC-40). The method uses inelastic spectra and gives peak responses consistent with those obtained from nonlinear time-history analysis. The seismic demand assessment method is hereafter called the Ductility Spectrum Method (DSM). It is used to estimate the seismic deformation of single-degree-of-freedom (SDOF) systems based on the Ductility Demand Response Spectrum (DDRS) developed by the authors.

Keywords: seismic demand, capacity, inelastic spectra, design and structure

Procedia PDF Downloads 373
18678 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from a telecommunication company, or the failure to pay for them. A South African telecommunication operator developed two internal fraud scorecards to mitigate the future risk of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are used in the vetting process to evaluate the fraud risk an applicant would present to the telecommunication operator. Telecommunication providers can use these scorecards to profile customers, as well as to isolate fraudulent and/or high-risk applicants. We provide the complete methodology used to develop the scorecards. Furthermore, a Determination and Discrimination (DD) ratio is introduced in the methodology to select the most influential variables from a group of related variables. The development of these scorecards revealed the following about fraudulent cases and fraudster behaviour in the telecommunications industry. Fraudsters typically target high-value handsets. Debit order dates scheduled for the end of the month have the highest fraud probability. Fraudsters target specific stores. Applicants who acquire an expensive package on a medium income, as well as applicants who acquire an expensive package on a high income, show higher fraud percentages. If, one month prior to application, an account is already in arrears (two months or more), the applicant has a high probability of fraud. Applicants with the highest average spend on calls have a higher probability of fraud. If the amount collected changes from month to month, the likelihood of fraud is higher. Lastly, young and middle-aged applicants are more likely to be targeted by fraudsters than other age groups.
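A scorecard of this kind typically combines applicant attributes through a logistic link into a fraud probability. The sketch below is a generic illustration under that assumption; the attribute names, weights, and intercept are hypothetical and are not the operator's scorecard:

```python
import math

def fraud_score(features, weights, intercept):
    # Linear scorecard: a weighted sum of applicant attributes passed
    # through the logistic link gives a probability of fraud.
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative attributes: handset value (normalised 0-1), debit-order day
# near month end (0/1), account already in arrears (0/1).
weights = [1.2, 0.8, 1.5]   # hypothetical coefficients
intercept = -3.0

low_risk = fraud_score([0.2, 0, 0], weights, intercept)
high_risk = fraud_score([0.9, 1, 1], weights, intercept)
```

In a vetting process, applications whose score exceeds a chosen cut-off would be routed for manual review rather than rejected outright.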

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 85
18677 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. Earthquake is a natural hazard with a random nature that can cause significant financial damage and casualties, and it is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment of Tehran, the capital of Iran, located in the Alborz-Azerbaijan province, has been carried out with both probabilistic and deterministic methods. The seismicity study covers a radius of 200 km around northern Tehran (X = 35.74° and Y = 51.37° in the LAT-LONG coordinate system) to identify the seismic sources and seismicity parameters of the study region. To identify the seismic sources, geological maps at a scale of 1:250,000 are used. We use the Kijko-Sellevoll method (1992) to estimate the seismicity parameters: the maximum likelihood estimation of the earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur, with a specified probability of exceedance, during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies and assigning proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950, and 2475 years are calculated. 
The horizontal peak ground accelerations on the seismic bedrock for the 50-, 475-, 950-, and 2475-year return periods are 0.12g, 0.30g, 0.37g, and 0.50g, and the vertical peak ground accelerations for the same return periods are 0.08g, 0.21g, 0.27g, and 0.36g, respectively.
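For a complete catalogue (a simpler setting than the incomplete-data Kijko-Sellevoll case used in the study), the Gutenberg-Richter b-value has a closed-form maximum-likelihood estimate due to Aki (1965). A minimal sketch with made-up magnitudes above a completeness threshold:

```python
import math

def aki_b_value(mags, m_min):
    # Maximum-likelihood b-value for a complete catalogue (Aki, 1965):
    #   b = log10(e) / (mean(M) - Mmin)
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# Illustrative magnitudes, all at or above the completeness magnitude 4.0.
mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.4, 4.9, 4.2]
b = aki_b_value(mags, m_min=4.0)   # close to the common b ~ 1 for tectonic regions
```

The Kijko-Sellevoll extension referenced in the abstract generalises this estimator to catalogues with incomplete reporting and uncertain magnitudes.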

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 43
18676 Selection of Designs in Ordinal Regression Models under Linear Predictor Misspecification

Authors: Ishapathik Das

Abstract:

The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown. The experimenter assumes a certain form of the linear predictor of the ordinal regression model, but the assumed form may not always be correct. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate the unknown function at the design points where observations are available; the unknown function is then estimated at all points in the design region using multivariate parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both the variance and the bias of the prediction caused by the misspecification in the linear predictor. The designs are compared using the quantile dispersion graphs approach. The graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.

Keywords: model misspecification, multivariate kriging, multivariate logistic link, ordinal response models, quantile dispersion graphs

Procedia PDF Downloads 357
18675 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

Authors: Saheed A. Gbadegeshin

Abstract:

A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods of commercialization. However, young entrepreneurs, technologists, and scientists would like to know the best method to commercialize their innovations. This raises the question: what is the best commercialization method? To answer the question, a systematic literature review was conducted, and practitioners were interviewed. The literature results revealed that there are many methods, but new methods are needed to improve commercialization, especially in these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, but the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.

Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention

Procedia PDF Downloads 303
18674 Critical Comparison of Two Teaching Methods: The Grammar Translation Method and the Communicative Teaching Method

Authors: Aicha Zohbie

Abstract:

The purpose of this paper is to critically compare two teaching methods: the communicative method and the grammar-translation method. The paper presents the importance of language awareness as an approach to teaching and learning language, along with some challenges that language teachers face. In addition, the paper strives to determine whether adopting the communicative teaching method or the grammar-translation method would be more effective for teaching a language. A variety of features are considered in comparing the two methods: the purpose of each method, the techniques used, teachers' and students' roles, the use of L1, the skills that are emphasized, the correction of students' errors, and the assessment of students. Finally, the paper includes suggestions and recommendations for implementing an approach that best meets students' needs in a classroom.

Keywords: language teaching methods, language awareness, communicative method, grammar translation method, advantages and disadvantages

Procedia PDF Downloads 111
18673 Risk Factors for Fall in Elderly with Diabetes Mellitus Type 2 in Jeddah Saudi Arabia 2022: A Cross-Sectional Study

Authors: Rami S. Alasmari, Abdullah Al Zahrani, Hattan A. Hassani, Nawwaf A. Almalky, Abdullah F. Bokhari, Alwalied A. Hafez

Abstract:

Diabetes mellitus type 2 (DMT2) is a major chronic condition that is common among elderly people, with multiple potential complications that could contribute to falls. However, this relationship is not well understood; thus, the aim of this study is to determine whether diabetes is an independent risk factor for falls in the elderly. In this observational cross-sectional study, 309 diabetic patients aged 60 or older who visited the primary healthcare centers of the Ministry of National Guard Health Affairs in Jeddah were chosen via a convenience sampling method. To collect the data, a semi-structured Fall Risk Assessment questionnaire and the Fall Efficacy Score scale were used. The mean age of the participants was 68.5 (SD: 7.4) years. Among the participants, 48.2% had experienced falling before, and 63.1% of them had suffered falls in the past 12 months. The results showed that gait problems were independently associated with a higher likelihood of falling among the elderly patients (OR = 1.98, 95% CI 1.08 to 3.62, p = 0.026). This paper suggests that diabetes mellitus is an independent fall risk factor among the elderly. Therefore, identifying such patients as being at higher risk and promptly referring them to a specialist falls clinic is recommended.
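The reported association can be expressed as an odds ratio from a 2x2 table with a Woolf-type confidence interval. The cell counts below are hypothetical, chosen only so the point estimate matches the reported OR of 1.98; they are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table layout (illustrative): a = fallers with gait problems,
    # b = non-fallers with gait problems, c = fallers without,
    # d = non-fallers without. Woolf CI on the log odds ratio.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(60, 40, 50, 66)   # point estimate 1.98
```

An interval that excludes 1, as in the study's 1.08 to 3.62, is what makes the association statistically significant at the 5% level.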

Keywords: diabetes, fall, elderly, risk factors

Procedia PDF Downloads 62
18672 Numerical Iteration Method to Find New Formulas for Nonlinear Equations

Authors: Kholod Mohammad Abualnaja

Abstract:

A new algorithm is presented to derive new iterative methods for solving nonlinear equations F(x) = 0 by using the variational iteration method. The efficiency of the considered method is illustrated by an example. The results show that the proposed iteration technique, without linearization or small perturbation, is very effective and convenient.
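Iterative formulas of the kind derived in such work are applied like the classical Newton iteration below. This sketch shows only the generic usage pattern, with Newton's method on an illustrative cubic, not the new formulas of the paper:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Classical Newton iteration x_{n+1} = x_n - f(x_n) / f'(x_n),
    # stopping when the update falls below the tolerance.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative equation F(x) = x^3 - x - 2 = 0, started near the real root.
root = newton(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5)
```

Variational-iteration-derived schemes follow the same loop but replace the update rule, typically to gain a higher order of convergence per step.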

Keywords: variational iteration method, nonlinear equations, Lagrange multiplier, algorithms

Procedia PDF Downloads 511
18671 Comparison of Finite-Element and IEC Methods for Cable Thermal Analysis under Various Operating Environments

Authors: M. S. Baazzim, M. S. Al-Saud, M. A. El-Kady

Abstract:

In this paper, the steady-state ampacity (current-carrying capacity) of an underground power cable system is evaluated by analytical and numerical methods for different conditions (depth of cable, spacing between phases, soil thermal resistivity, ambient temperature, wind speed) and for two system voltage levels, 132 and 380 kV. The analytical (traditional) method is based on the thermal analysis method developed by Neher and McGrath, further enhanced by the International Electrotechnical Commission (IEC) and published in standard IEC 60287. The numerical analysis uses the finite element method, carried out with commercial finite-element software.
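In its simplest single-conductor case, with dielectric and sheath losses neglected, the IEC 60287-style analytical rating reduces to a square-root balance of the permissible temperature rise against conductor losses and the chain of thermal resistances. The sketch below uses that simplified form with illustrative values only, not the paper's full model:

```python
import math

def ampacity(delta_theta, r_ac, t_thermal):
    # Highly simplified steady-state rating for a single conductor,
    # neglecting dielectric and sheath losses:
    #   I = sqrt(delta_theta / (R_ac * sum(T_i)))
    # delta_theta: conductor-to-ambient temperature rise (K)
    # r_ac: AC resistance per metre at operating temperature (ohm/m)
    # t_thermal: thermal resistances in series (K.m/W)
    return math.sqrt(delta_theta / (r_ac * sum(t_thermal)))

# Illustrative values: 65 K rise, 30 micro-ohm/m AC resistance, thermal
# resistances of insulation, bedding/serving, and surrounding soil.
i_rated = ampacity(65.0, 30e-6, [0.4, 0.1, 1.2])   # roughly 1.1 kA
```

The parametric studies in the paper (burial depth, spacing, soil resistivity) enter this balance mainly through the last thermal-resistance term.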

Keywords: cable ampacity, finite element method, underground cable, thermal rating

Procedia PDF Downloads 343
18670 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia

Authors: Nguyen-Thanh Son

Abstract:

Flood is among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels because policymakers need reliable spatial and temporal information on flood-prone areas to establish successful monitoring programs that reduce possible impacts on the country's economy and people's livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time series of vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The flood mapping results were verified against ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by the close agreement between the mapped flood area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Although several sources potentially lowered the mapping accuracy of flood-prone areas, including cloud contamination, mixed-pixel issues, and the resolution mismatch between the mapping results and the ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods. 
The results, in the form of quantitative information on spatiotemporal flood distributions, could help policymakers evaluate their management strategies for mitigating the negative effects of floods on agriculture and people's livelihoods in the country.
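Step (2), delineating inundated pixels from the indices, can be illustrated with a simple water-index threshold. The Land Surface Water Index (LSWI) from MODIS NIR and SWIR reflectances is one common choice, though the exact indices, threshold, and reflectance values here are assumptions, not the paper's specification:

```python
def water_index(nir, swir):
    # Land Surface Water Index (LSWI) from NIR and SWIR reflectances;
    # open water reflects little in SWIR, pushing the index positive.
    return (nir - swir) / (nir + swir)

def flood_mask(nir_band, swir_band, threshold=0.0):
    # Flag a pixel as inundated when its LSWI exceeds the threshold.
    return [[water_index(n, s) > threshold for n, s in zip(nr, sr)]
            for nr, sr in zip(nir_band, swir_band)]

# A 2x2 toy scene: left column water-like, right column dry-land-like.
nir = [[0.30, 0.05], [0.25, 0.04]]
swir = [[0.20, 0.15], [0.18, 0.12]]
mask = flood_mask(nir, swir)
```

In a time-series setting such as the study's, the same test is applied per composite date, and persistence rules separate seasonal floodwater from permanent water bodies.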

Keywords: MODIS, flood, mapping, Cambodia

Procedia PDF Downloads 102
18669 Multistage Adomian Decomposition Method for Solving Linear and Non-Linear Stiff System of Ordinary Differential Equations

Authors: M. S. H. Chowdhury, Ishak Hashim

Abstract:

In this paper, linear and non-linear stiff systems of ordinary differential equations are solved by the classical Adomian decomposition method (ADM) and the multistage Adomian decomposition method (MADM). The MADM is a technique adapted from the standard ADM in which the standard method is converted into a hybrid numeric-analytic method, the multistage ADM. The MADM is tested on several examples. Comparisons with an explicit Runge-Kutta-type method (RK) and the classical ADM demonstrate the limitations of the ADM and the promising capability of the MADM for solving stiff initial value problems (IVPs).
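The difference between ADM and MADM can be sketched on the test equation y' = -y, y(0) = 1, whose Adomian series on a single stage is the truncated exponential; MADM simply restarts that series on successive subintervals. The equation, term count, and stage count below are chosen purely for illustration:

```python
import math

def adm_exp_decay(t0, y0, t, n_terms=8):
    # Truncated Adomian series for y' = -y on one stage from (t0, y0):
    #   y(t) = y0 * sum_k (-(t - t0))^k / k!
    h = t - t0
    return y0 * sum((-h) ** k / math.factorial(k) for k in range(n_terms))

def madm_exp_decay(t_end, y0=1.0, stages=10, n_terms=8):
    # Multistage ADM: apply the truncated series over successive
    # subintervals, restarting each stage from the previous endpoint.
    h = t_end / stages
    t, y = 0.0, y0
    for _ in range(stages):
        y = adm_exp_decay(t, y, t + h, n_terms)
        t += h
    return y

y = madm_exp_decay(5.0)   # approximates exp(-5)
```

A single-stage truncated series degrades quickly as t grows, which is exactly the limitation of classical ADM on stiff problems that restarting in stages is meant to cure.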

Keywords: stiff system of ODEs, Runge-Kutta Type Method, Adomian decomposition method, Multistage ADM

Procedia PDF Downloads 402
18668 A Method for Measurement and Evaluation of Drape of Textiles

Authors: L. Fridrichova, R. Knížek, V. Bajzík

Abstract:

Drape is one of the important visual characteristics of fabric. This paper introduces an innovative method for measuring and evaluating the drape shape of fabric. The measuring principle is based on the possibility of repeated vertical straining of the fabric, which more accurately simulates the real behavior of fabric in the process of draping. The method is fully automated, so a sample can be measured over any number of cycles and any time horizon. Using the present method of measurement, we are able to describe the viscoelastic behavior of the fabric.

Keywords: drape, drape shape, automated drapemeter, fabric

Procedia PDF Downloads 621
18667 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products involves two main steps. First, the rainy pixels are identified. This step is based on a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold on the temperature difference of less than 11 Kelvin, which approximates the clouds with a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud-top temperature and the TMI rain rates. The correlation between these two variables is negative, meaning that rainfall intensity increases with decreasing temperature. The fitted equation is applied to the MSG images, available at 15-minute intervals, which are summed over the whole day. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected using a threshold criterion of large rainfall depth (> 50 mm/day) at one or more rainfall stations. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation results of the rainfall estimated by combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSGMPE and PERSIANN products. The combined product performed better during the wet season. We also note an overestimation of the maximum estimated rain for many events.
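The second step, fitting cloud-top temperature against TMI rain rate, is an ordinary least-squares regression. The sketch below fits a line to made-up collocated pairs; the real fit in the study uses the actual MSG/TMI collocations and need not be linear:

```python
def linear_fit(x, y):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Illustrative pairs: colder cloud tops (K) paired with higher rain rates (mm/h),
# reflecting the negative correlation noted in the abstract.
temps = [200.0, 210.0, 220.0, 230.0, 240.0]
rates = [12.0, 9.5, 7.0, 4.5, 2.0]
a, b = linear_fit(temps, rates)   # slope b is negative
```

Once fitted, the same relation is evaluated on every 15-minute MSG temperature field and the resulting rain-rate maps are accumulated into daily totals.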

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 145
18666 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments

Authors: Xiaoqin Wang, Li Yin

Abstract:

Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest, and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula that expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates are often influenced by earlier treatments and in turn influence subsequent treatments. Consequently, the standard parameters (i.e., the means of the outcome given all treatments and covariates) are essentially all different (the null paradox). Furthermore, the dimension of the parameters is huge (the curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead, we use point effects of treatments to develop a likelihood-based parametric approach, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing. 
We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than those of the other countries, largely due to its measures during the initial period of the pandemic.

Keywords: causal effect, point effect, statistical modelling, sequential causal inference

Procedia PDF Downloads 174
18665 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest-neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. 
According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, and NWFE, and of their kernel versions, KPCA, GDA, and KNWFE.
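The kernel trick that turns DNP into KDNP replaces inner products between samples with kernel evaluations, so the core computational object is the Gram matrix. A minimal sketch with an RBF kernel; the kernel choice, gamma, and data are illustrative, not the paper's configuration:

```python
import math

def rbf_kernel_matrix(X, gamma=0.5):
    # Gram matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2) for all sample
    # pairs; any kernel-based extension (KPCA, GDA, KDNP) works from
    # such a matrix instead of the raw coordinates.
    def k(u, v):
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
    return [[k(u, v) for v in X] for u in X]

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = rbf_kernel_matrix(X)   # symmetric, with unit diagonal
```

Because every quantity in the DNP criterion can be written through such inner products, swapping in the Gram matrix yields the kernel version without ever forming the nonlinear feature map explicitly.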

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 299
18664 Predicting Expectations of Non-Monogamy in Long-Term Romantic Relationships

Authors: Michelle R. Sullivan

Abstract:

Positive romantic relationships and marriages offer a buffer against a host of physical and emotional difficulties. Conversely, poor relationship quality and marital discord can have deleterious consequences for individuals and families. Research has described non-monogamy, infidelity, and consensual non-monogamy both as a consequence and as a cause of relationship difficulty, or as a unique way a couple strives to make a relationship work. Much research on consensual non-monogamy has built on feminist theory and critique. To the author's best knowledge, to date, no studies have examined the predictive relationship between individual and relationship characteristics and expectations of non-monogamy. The current longitudinal study: 1) estimated the prevalence of expectations of partner non-monogamy and 2) evaluated whether gender, sexual identity, age, education, how a couple met, and relationship quality were predictive of expectations of partner non-monogamy. This study utilized the publicly available longitudinal dataset How Couples Meet and Stay Together. Adults aged 18 to 98 years (n = 4002) were surveyed by phone over 5 waves from 2009 to 2014. Demographics and how a couple met were gathered through self-report in Wave 1, and relationship quality and expectations of partner non-monogamy were gathered through self-report in Waves 4 and 5 (n = 1047). The prevalence of expectations of partner non-monogamy (encompassing both infidelity and consensual non-monogamy) was 4.8%. Logistic regression models indicated that sexual identity, gender, education, and relationship quality were significantly predictive of expectations of partner non-monogamy. Specifically, male gender, lower education, identifying as lesbian, gay, or bisexual, and lower relationship quality scores were predictive of expectations of partner non-monogamy. Male gender was not predictive of expectations of partner non-monogamy in the follow-up logistic regression model. 
Age and whether a couple met online were not associated with expectations of partner non-monogamy. Clinical implications include awareness that lesbian, gay, and bisexual individuals are more likely to expect non-monogamy, and of the sequelae of relationship dissatisfaction that may be related. Future research could differentiate between non-monogamy subtypes and the personal and relationship variables that lead to consensual non-monogamy and infidelity as separate constructs, as well as explore the relationship between predicted partner behavior and actual partner behavioral outcomes.

Keywords: open relationship, polyamory, infidelity, relationship satisfaction

Procedia PDF Downloads 134
18663 Zero-Dissipative Explicit Runge-Kutta Method for Periodic Initial Value Problems

Authors: N. Senu, I. A. Kasim, F. Ismail, N. Bachok

Abstract:

In this paper, a zero-dissipative explicit Runge-Kutta method is derived for solving second-order ordinary differential equations with periodic solutions. The phase-lag and dissipation properties of Runge-Kutta (RK) methods are also discussed. The new method has algebraic order three with dissipation of order infinity. The numerical results for the new method are compared with those of an existing method when solving second-order differential equations with periodic solutions using a constant step size.
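The behaviour targeted by such methods can be checked on the standard test problem y'' = -y, whose exact solution is cos t: a non-dissipative scheme should preserve the oscillation amplitude over a full period. The sketch below uses the classical fourth-order Runge-Kutta method as a generic baseline; the paper's order-three zero-dissipative scheme has different coefficients:

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y),
    # where y is a list-valued state.
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# y'' = -y as a first-order system: state = [y, y']; exact solution cos(t).
f = lambda t, s: [s[1], -s[0]]
t, state, h = 0.0, [1.0, 0.0], 0.01
while t < 2 * math.pi - 1e-12:
    state = rk4_step(f, t, state, h)
    t += h
```

Comparing the numerical state against cos(t) after integrating through a full period is exactly the kind of constant-step experiment the abstract describes.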

Keywords: dissipation, oscillatory solutions, phase-lag, Runge-Kutta methods

Procedia PDF Downloads 377
18662 Reflection on Using Bar Model Method in Learning and Teaching Primary Mathematics: A Hong Kong Case Study

Authors: Chui Ka Shing

Abstract:

This case study research attempts to examine the use of the Bar Model Method approach in learning and teaching mathematics in a primary school in Hong Kong. The objectives of the study are to find out (a) to what extent the Bar Model Method approach enhances the construction of students' mathematics concepts, and (b) how school-based mathematics curriculum development proceeds when adopting the Bar Model Method approach. This case study illuminates the effectiveness of using the Bar Model Method to solve mathematics problems from Primary 1 to Primary 6. Effective pedagogies and assessments were developed to strengthen the use of the Bar Model Method across year levels. Suggestions, including school-based curriculum development for using the Bar Model Method, and directions for further study are discussed.

Keywords: bar model method, curriculum development, mathematics education, problem solving

Procedia PDF Downloads 187
18661 An Analytical Method for Bending Rectangular Plates with All Edges Clamped Supported

Authors: Yang Zhong, Heng Liu

Abstract:

The decoupling method and the modified Navier method are combined for accurate bending analysis of rectangular thick plates with all edges clamped. The basic governing equations for Mindlin plates are first decoupled into independent partial differential equations that can be solved separately. Using the modified Navier method, the analytic solution for a rectangular thick plate with all edges clamped is then derived. The solution method used in this paper avoids the complicated derivation of coefficients and yields the solution to the problem directly. Finally, numerical comparisons confirm the correctness and accuracy of the results.
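The modified Navier solution for clamped Mindlin plates is too long to reproduce here, but the classical Navier double-sine series it extends can be sketched for the simpler baseline case of a simply supported Kirchhoff plate under uniform load (an illustration of the series machinery, not the paper's clamped-plate solution):

```python
import numpy as np

def navier_deflection(x, y, a, b, q0, D, terms=50):
    """Classical Navier double-sine series for a simply supported
    Kirchhoff plate of size a x b under uniform load q0, flexural
    rigidity D.  Only odd harmonics contribute for a uniform load."""
    w = 0.0
    for m in range(1, terms, 2):
        for n in range(1, terms, 2):
            w += (np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)
                  / (m * n * ((m / a) ** 2 + (n / b) ** 2) ** 2))
    return 16.0 * q0 / (np.pi ** 6 * D) * w

# centre deflection of a unit square plate with q0 = D = 1;
# the classical benchmark value is w_max ≈ 0.00406 q a^4 / D
w_c = navier_deflection(0.5, 0.5, 1.0, 1.0, 1.0, 1.0)
```

The series converges rapidly because the terms decay like 1/(m n (m² + n²)²), so a few dozen harmonics suffice for engineering accuracy.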

Keywords: Mindlin plates, decoupling method, modified Navier method, bending rectangular plates

Procedia PDF Downloads 561
18660 Modern Methods of Technology and Organization of Production of Construction Works during the Implementation of Construction 3D Printers

Authors: Azizakhanim Maharramli

Abstract:

The gradual transition from entrenched traditional technology and organization of construction production to innovative additive construction technology inevitably meets technological, technical, organizational, labour and, ultimately, social difficulties. The chosen nodal method helps eliminate these difficulties by combining familiar construction methods with additive technology, countering the widely held belief in world practice that the labour force will be subjected to drastic reduction. The nodal method of additive technology creates favourable conditions for an optimal distribution of labour across facilities, owing to the consistent performance of homogeneous work and the introduction of both additive and traditional technology into construction production.

Keywords: parallel method, sequential method, stream method, combined method, nodal method

Procedia PDF Downloads 52
18659 About Some Results of the Determination of Alcohol in Moroccan Gasoline-Alcohol Mixtures

Authors: Mahacine Amrani

Abstract:

A simple and rapid method for the determination of alcohol in gasoline-alcohol mixtures using density measurements is described. The method can detect a minimum of 1% alcohol by volume, with a precision of ± 3%. The method is most useful for field tests in the quality assessment of alcohol-blended fuels.
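The paper's calibration is not given; a minimal sketch of the underlying idea, assuming ideal volume-additive mixing and illustrative densities for gasoline (0.740 g/mL) and ethanol (0.789 g/mL), might look like this:

```python
def alcohol_fraction(rho_mix, rho_gasoline=0.740, rho_alcohol=0.789):
    """Estimate the alcohol volume fraction of a gasoline-alcohol
    mixture from its measured density, assuming ideal (linear,
    volume-additive) mixing.  The densities are illustrative
    defaults, not the paper's calibration values."""
    return (rho_mix - rho_gasoline) / (rho_alcohol - rho_gasoline)

# a mixture density exactly halfway between the two pure densities
# corresponds to a 50 % alcohol fraction under this model
v = alcohol_fraction(0.7645)
```

Real blends deviate slightly from volume-additive mixing, which is one reason a field method of this kind carries a precision of a few percent.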

Keywords: gasoline-alcohol, mixture, alcohol determination, density, measurement, Morocco

Procedia PDF Downloads 278
18658 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs

Authors: Malo Pocheau-Lesteven, Olivier Le Maître

Abstract:

Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis, where each subsystem is modeled separately and interacts with the other subsystems to model the complete system. Coherence of the outputs of the different subsystems is achieved through compatibility constraints, which enforce the coupling between subsystems. Because some subsystems are complex and their models computationally expensive to evaluate, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation at relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature can be leveraged to evaluate the likelihood of satisfying the compatibility constraints. The paper presents infilling strategies that build accurate surrogate models of the subsystems in regions where the compatibility constraint is likely to be met. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application to velocity prediction programs used in offshore racing naval architecture demonstrates the methods' applicability in a real engineering context. Some examples of the application of uncertainty quantification to the field of naval architecture are also presented.
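As a hedged sketch of the idea (not the authors' implementation), the snippet below fits a zero-mean Gaussian process with an RBF kernel to a toy compatibility residual g(x) and selects the next infill point where the posterior probability of satisfying |g(x)| ≤ tol is highest; the kernel length-scale, noise level, and tolerance are illustrative choices:

```python
import numpy as np
from math import erf, sqrt

def rbf(A, B, ls=0.3):
    # squared-exponential kernel on 1-D inputs, unit signal variance
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def gp_posterior(Xt, yt, Xs, noise=1e-8):
    # standard GP regression posterior mean and variance at Xs
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = rbf(Xt, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yt))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

def feasibility(mu, var, tol=0.05):
    # P(|g(x)| <= tol) under the Gaussian posterior at each point
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    sd = np.sqrt(var)
    return np.array([Phi((tol - m) / s) - Phi((-tol - m) / s)
                     for m, s in zip(mu, sd)])

# toy compatibility residual g(x) = sin(3x); infill where g is likely ~ 0
Xt = np.array([0.0, 0.4, 0.7, 1.0])
yt = np.sin(3 * Xt)
Xs = np.linspace(0, 1, 101)
mu, var = gp_posterior(Xt, yt, Xs)
x_new = Xs[np.argmax(feasibility(mu, var))]
```

On this toy residual the only near-feasible region in [0, 1] lies near x = 0, so the infill criterion concentrates new samples there.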

Keywords: infilling strategy, gaussian process, multi-disciplinary analysis, velocity prediction program

Procedia PDF Downloads 128
18657 The Finite Element Method for Nonlinear Fredholm Integral Equation of the Second Kind

Authors: Melusi Khumalo, Anastacia Dlamini

Abstract:

In this paper, we consider the numerical solution of nonlinear Fredholm integral equations of the second kind. We work with a uniform mesh and use Lagrange polynomials together with the Galerkin finite element method, where the weight function is chosen so that it takes the form of the approximate solution but with arbitrary coefficients. We apply the finite element method to nonlinear Fredholm integral equations of the second kind and consider the error analysis of the method. Furthermore, we look at a specific example to illustrate the implementation of the finite element method.
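The Galerkin finite element scheme itself is not reproduced in the abstract; as a simpler illustration of solving a nonlinear Fredholm equation of the second kind numerically, the sketch below uses successive approximation on a trapezoidal quadrature grid for a test problem with known solution u(x) = x:

```python
import numpy as np

def solve_fredholm(f, K, lam, a=0.0, b=1.0, n=201, iters=100):
    """Fixed-point (successive approximation) solution of
    u(x) = f(x) + lam * int_a^b K(x, t) u(t)^2 dt on a uniform grid.
    A simpler scheme than the paper's Galerkin FEM, for illustration."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoid quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    u = f(x)                               # initial iterate u_0 = f
    for _ in range(iters):
        u = f(x) + lam * (K(x[:, None], x[None, :]) @ (w * u ** 2))
    return x, u

# test problem with exact solution u(x) = x:
#   u(x) = 0.75 x + int_0^1 x t u(t)^2 dt
x, u = solve_fredholm(lambda x: 0.75 * x, lambda x, t: x * t, 1.0)
err = np.max(np.abs(u - x))
```

For this separable kernel the iteration reduces to a scalar recursion a ← 0.75 + a²/4, which converges to the fixed point a = 1, i.e. u(x) = x, up to quadrature error.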

Keywords: finite element method, Galerkin approach, Fredholm integral equations, nonlinear integral equations

Procedia PDF Downloads 340
18656 An Online 3D Modeling Method Based on a Lossless Compression Algorithm

Authors: Jiankang Wang, Hongyang Yu

Abstract:

This paper proposes a portable online 3D modeling method. The method first uses a depth camera to collect data and compresses the depth data with a frame-by-frame lossless compression scheme; the color image is encoded in the H.264 format. After the cloud receives the color and depth images, a 3D modeling method based on BundleFusion completes the reconstruction. The results of this study indicate that the method is portable, online, and efficient, with broad application prospects.
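The paper's exact codec is not specified beyond "frame-by-frame lossless"; a minimal sketch of one common realization, temporal delta coding of depth frames followed by zlib entropy coding, might look like this:

```python
import numpy as np
import zlib

def compress_depth_stream(frames):
    """Frame-by-frame lossless compression: store the first depth
    frame raw, then per-frame differences, each zlib-compressed.
    A generic sketch; the paper's actual codec is not specified."""
    packets, prev = [], None
    for f in frames:
        f = np.ascontiguousarray(f, dtype=np.int32)
        delta = f if prev is None else f - prev
        packets.append(zlib.compress(delta.tobytes(), level=6))
        prev = f
    return packets

def decompress_depth_stream(packets, shape):
    # exact inverse: decompress each delta and accumulate
    frames, prev = [], None
    for p in packets:
        delta = np.frombuffer(zlib.decompress(p), dtype=np.int32).reshape(shape)
        f = delta if prev is None else prev + delta
        frames.append(f)
        prev = f
    return frames

# slowly varying synthetic depth maps: deltas compress far better than raw frames
rng = np.random.default_rng(0)
base = rng.integers(500, 4000, size=(120, 160))
stream = [base + i for i in range(5)]
packets = compress_depth_stream(stream)
out = decompress_depth_stream(packets, (120, 160))
```

Because consecutive depth frames are highly correlated, the delta frames are nearly constant and compress to a tiny fraction of the first packet.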

Keywords: 3D reconstruction, bundlefusion, lossless compression, depth image

Procedia PDF Downloads 49
18655 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal’s Terai region is part of the Ganges river basin, one of the most disaster-prone areas of the world, where recurrent monsoon flooding causes millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential to developing resilient communities and strong disaster management policies. The objectives of this study were to develop a flood hazard zonation map of the Orai Khola watershed and to map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river, and land use/land cover on the basis of their estimated influence on flood hazard, and performing weighted overlay analysis to identify highly vulnerable areas. Forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%), respectively, from 1996 to 2016. The sediment and bare land area decreased by 12.45 km² (4.12%) over the same period, whereas the settlement and cropland area showed a consistent increase of 14.22 km² (4.7%). Waterbody coverage also increased, by 0.3 km² (0.09%), from 1996 to 2016. Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone, and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) fell into the very high hazard zone.
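A weighted overlay reduces to a weighted sum of rated factor rasters followed by reclassification into hazard zones; a minimal sketch with illustrative ratings and weights (not the study's calibrated values):

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Weighted overlay analysis: each layer is a raster of hazard
    ratings (here 1 = low influence to 5 = high), and the weights
    express each factor's assumed influence; they must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * np.asarray(l, dtype=float)
               for w, l in zip(weights, layers))

# toy 2x2 rasters for the four factors used in the study;
# ratings and weights below are illustrative only
slope    = np.array([[1, 2], [4, 5]])
rainfall = np.array([[3, 3], [4, 4]])
river_d  = np.array([[5, 2], [1, 1]])
landuse  = np.array([[2, 2], [3, 5]])
hazard = weighted_overlay([slope, rainfall, river_d, landuse],
                          [0.3, 0.3, 0.2, 0.2])
# reclassify the continuous score into low / moderate / high / very high
zones = np.digitize(hazard, [2.0, 3.0, 4.0])
```

In a GIS workflow the same arithmetic is applied cell-by-cell to full rasters; the class breaks defining the hazard zones are a modelling choice.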

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 316
18654 A Method for Modeling Flexible Manipulators: Transfer Matrix Method with Finite Segments

Authors: Haijie Li, Xuping Zhang

Abstract:

This paper presents a computationally efficient method for modeling robot manipulators with flexible links and joints. The approach combines the discrete-time transfer matrix method with the finite segment method, in which each flexible link is discretized into a number of rigid segments connected by torsion springs, and joint flexibility is likewise modeled by torsion springs. The proposed method avoids assembling the global dynamic equations and has the advantage of modeling non-uniform manipulators. Experiments and simulations of a single-link flexible manipulator are conducted to verify the proposed methodology. Simulations of a three-link robot arm with link and joint flexibility are also performed.
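As a hedged illustration of the finite segment idea (rigid segments plus torsion springs; this is not the authors' transfer matrix formulation), the sketch below assembles stiffness and mass matrices for a flexible cantilever link and recovers its fundamental frequency, which should approach the Euler-Bernoulli value 3.516·sqrt(EI/(ρA·L⁴)) as the number of segments grows:

```python
import numpy as np
from scipy.linalg import eigh

def cantilever_finite_segments(EI, rhoA, L, N=100):
    """Finite-segment model of a flexible cantilever link: N rigid
    segments of length h joined by torsion springs of stiffness EI/h,
    with each segment's mass rhoA*h lumped at its midpoint.
    Generalised coordinates are the joint rotations theta_i
    (small-angle assumption)."""
    h = L / N
    k, m = EI / h, rhoA * h
    D = np.eye(N) - np.eye(N, k=-1)            # relative joint rotations
    K = k * (D.T @ D)                          # strain energy (k/2)||D theta||^2
    A = h * (np.tril(np.ones((N, N))) - 0.5 * np.eye(N))  # midpoint deflections y = A theta
    M = m * (A.T @ A)                          # kinetic energy (m/2)||A theta_dot||^2
    w2 = eigh(K, M, eigvals_only=True)         # generalised eigenproblem K v = w^2 M v
    return np.sqrt(w2)

# natural frequencies of a unit beam (EI = rhoA = L = 1)
omega = cantilever_finite_segments(1.0, 1.0, 1.0)
```

The lumped chain converges to the continuous beam from below as N increases, which is one way to choose the number of segments in practice.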

Keywords: flexible manipulator, transfer matrix method, linearization, finite segment method

Procedia PDF Downloads 403
18653 Extreme Value Modelling of Ghana Stock Exchange Indices

Authors: Kwabena Asare, Ezekiel N. N. Nortey, Felix O. Mettle

Abstract:

Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for the rare events leading to such crises have become essential in finance and risk management. This paper models the extreme values of the Ghana Stock Exchange All-Shares indices (2000-2010) by applying Extreme Value Theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred, so an ARMA-GARCH model was first fitted to the data to correct for the autocorrelation and conditional heteroscedasticity present in the returns series. The Peak Over Threshold (POT) approach of the EVT, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was then employed. Maximum likelihood estimates of the model parameters were obtained, and the model’s goodness of fit was assessed graphically using Q-Q, P-P, and density plots. The findings indicate that the GPD provides an adequate fit to the excesses. The size of extreme daily Ghanaian stock market movements was then computed using the Value at Risk (VaR) and Expected Shortfall (ES) risk measures at high quantiles, based on the fitted GPD model.
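The POT tail estimates can be sketched as follows, using the standard GPD tail formulas for VaR and ES on synthetic heavy-tailed data rather than the Ghanaian series (threshold choice and data are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, u, p):
    """Peak-over-threshold estimates of VaR and ES at level p:
    fit a GPD (shape xi, scale beta) to exceedances over threshold u
    by maximum likelihood, then invert the standard tail formulas
    VaR_p = u + (beta/xi) * (((n/N_u) * (1 - p)) ** (-xi) - 1)
    ES_p  = VaR_p / (1 - xi) + (beta - xi * u) / (1 - xi)."""
    exc = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exc, floc=0.0)   # location fixed at 0
    n, n_u = len(losses), len(exc)
    var = u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1)
    es = var / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var, es

# heavy-tailed synthetic returns (Student-t, 4 degrees of freedom);
# threshold set at the empirical 95th percentile
rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=20000)
var99, es99 = pot_var_es(losses, u=np.quantile(losses, 0.95), p=0.99)
```

For a t-distribution with 4 degrees of freedom the true 99% quantile is about 3.75, and the fitted GPD shape parameter is close to 1/4, the reciprocal of the degrees of freedom.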

Keywords: extreme value theory, expected shortfall, generalized pareto distribution, peak over threshold, value at risk

Procedia PDF Downloads 514
18652 Dynamic Response Analysis of Structure with Random Parameters

Authors: Ahmed Guerine, Ali El Hafidi, Bruno Martin, Philippe Leclaire

Abstract:

In this paper, we propose a method for computing the dynamic response of multi-storey structures with uncertain-but-bounded parameters. The effectiveness of the proposed method is demonstrated by a numerical example of a three-storey structure. The equation of motion is integrated numerically using Newmark’s method, and the numerical results are obtained by the proposed method. The results of the interval analysis method are compared with those of a probabilistic approach: the interval analysis method provides a mean curve that lies between the upper and lower bounds obtained from the probabilistic approach.
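As a hedged sketch (illustrative parameters, not the paper's structure), the snippet below implements average-acceleration Newmark integration for a single-degree-of-freedom system and brackets the response by running the model at the extreme values of an uncertain stiffness, the vertex idea behind interval bounding:

```python
import numpy as np

def newmark(m, c, k, F, dt, n, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of
    m*x'' + c*x' + k*x = F(t) from rest, n steps of size dt."""
    x = v = 0.0
    a = (F(0.0) - c * v - k * x) / m
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    xs = [x]
    for i in range(1, n + 1):
        t = i * dt
        rhs = (F(t)
               + m * (x / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        xn = rhs / keff
        vn = (gamma / (beta * dt) * (xn - x) + (1 - gamma / beta) * v
              + dt * (1 - gamma / (2 * beta)) * a)
        an = (xn - x) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        x, v, a = xn, vn, an
        xs.append(x)
    return np.array(xs)

# interval-style bounding: run the deterministic model at the extremes
# of an uncertain stiffness k in [0.9, 1.1] under a unit step load
# (illustrative; true interval methods propagate bounds through every step)
F = lambda t: 1.0
lo = newmark(1.0, 0.4, 0.9, F, 0.01, 2000)   # softer structure
hi = newmark(1.0, 0.4, 1.1, F, 0.01, 2000)   # stiffer structure
```

With damping, both responses settle toward their static values F/k, so the two runs bracket the steady-state displacement of any stiffness inside the interval.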

Keywords: multi-storey structure, dynamic response, interval analysis method, random parameters

Procedia PDF Downloads 159