Search results for: vector error correction model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18637

15667 Predicting Returns, Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models

Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan

Abstract:

This paper extends the conditional autoregressive range (CARR) model to the multivariate CARR (MCARR) model, and further to the two-stage MCARR-return model, to model and forecast volatilities, correlations and returns of multiple financial assets. The first-stage model fits the scaled realised Parkinson volatility measures of the individual series and their pairwise sums of indices to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise-sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is input into the stage-two return model to capture the heteroskedasticity of assets’ returns. We investigate different choices of mean functions to describe the volatility dynamics. Empirical applications are based on the Standard and Poor's 500, Dow Jones Industrial Average and Dow Jones United States Financial Service Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than those using CARR models, based on two robust loss functions with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return models for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.
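
As a rough illustration of the stage-one inputs, the sketch below computes the Parkinson range-based variance for two indices and for their pairwise-sum series, and backs out an implied covariance via the identity Var(X+Y) = Var(X) + Var(Y) + 2Cov(X,Y). The column names, sample prices and the simplifying assumption that the highs and lows of the summed series coincide with the sums of highs and lows are illustrative, not the paper's data handling.

```python
import numpy as np
import pandas as pd

def parkinson_range(high: pd.Series, low: pd.Series) -> pd.Series:
    """Daily Parkinson range-based variance estimate: (ln(H/L))^2 / (4 ln 2)."""
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

# hypothetical daily high/low data for two indices, e.g. S&P 500 and DJIA
data = pd.DataFrame({
    "spx_high": [4510.0, 4522.5], "spx_low": [4470.0, 4481.0],
    "djia_high": [35320.0, 35410.0], "djia_low": [35010.0, 35120.0],
})

# individual series
spx_var = parkinson_range(data["spx_high"], data["spx_low"])
djia_var = parkinson_range(data["djia_high"], data["djia_low"])

# pairwise-sum series: apply the same range measure to the summed index levels
# (treating the sum of highs/lows as the high/low of the sum is a simplification)
pair_var = parkinson_range(data["spx_high"] + data["djia_high"],
                           data["spx_low"] + data["djia_low"])

# implied covariance from Var(X+Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
pair_cov = 0.5 * (pair_var - spx_var - djia_var)
print(pair_cov)
```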

Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk

Procedia PDF Downloads 96
15666 Performance Evaluation of Contemporary Classifiers for Automatic Detection of Epileptic EEG

Authors: K. E. Ch. Vidyasagar, M. Moghavvemi, T. S. S. T. Prabhat

Abstract:

Epilepsy is a global problem, and because seizures often elude even the most careful diagnosis, automatic detection using the electroencephalogram (EEG) would have a huge impact on diagnosis of the disorder. Among the multitude of methods for automatic epilepsy detection, the best one should be identified on the basis of classification accuracy. This paper evaluates and compares the candidate methods on that basis. Since accuracy depends on the classifier, this paper discusses classifiers such as quadratic discriminant analysis (QDA), classification and regression tree (CART), support vector machine (SVM), naive Bayes classifier (NBC), linear discriminant analysis (LDA), K-nearest neighbor (KNN) and artificial neural networks (ANN). Results show that ANN is the most accurate of all the above classifiers, with 97.7% accuracy, 97.25% specificity and 98.28% sensitivity. It is followed closely by SVM, whose results differ by about 1%. These results should help researchers choose the best classifier for the detection of epilepsy.
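
A minimal scikit-learn comparison in the same spirit is sketched below; the EEG feature vectors and labels are random placeholders, and the hyperparameters are assumptions rather than the settings used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# X: EEG feature vectors (one row per segment), y: 0 = normal, 1 = seizure
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # placeholder features
y = rng.integers(0, 2, size=200)        # placeholder labels

classifiers = {
    "QDA":  QuadraticDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "SVM":  SVC(kernel="rbf"),
    "NBC":  GaussianNB(),
    "LDA":  LinearDiscriminantAnalysis(),
    "KNN":  KNeighborsClassifier(n_neighbors=5),
    "ANN":  MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```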

Keywords: classification, seizure, KNN, SVM, LDA, ANN, epilepsy

Procedia PDF Downloads 517
15665 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado

Authors: Masahiro Suzuki, Nobuyuki Okura

Abstract:

The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, in spite of the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes with casualties in the past. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatuses: one is a tornado simulator, and the other is a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train, as a single-car or three-car unit, run through the swirling flow with a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated by using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is maximum when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train. The simulated results qualitatively agreed with the experimental ones.
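
For reference, a Rankine vortex prescribes solid-body rotation inside the core and free-vortex (1/r) decay outside it; the sketch below evaluates that tangential-velocity profile using the core parameters measured in the experiment (7.4 m/s, 96 mm). Whether the authors add radial or translational components is not stated here, so this is the basic tangential profile only.

```python
import numpy as np

def rankine_tangential_velocity(r, v_max=7.4, r_core=0.096):
    """Tangential velocity of a Rankine vortex: solid-body rotation inside the
    core (r <= r_core) and a free vortex (~1/r decay) outside it."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_core,
                    v_max * r / r_core,
                    v_max * r_core / np.maximum(r, 1e-12))

# radial profile sampled across the measured core radius (96 mm) and beyond
radii = np.linspace(0.0, 0.5, 6)
print(rankine_tangential_velocity(radii))
```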

Keywords: aerodynamic force, experimental method, tornado, train

Procedia PDF Downloads 233
15664 Calibration Model of %Titratable Acidity (Citric Acid) for Intact Tomato by Transmittance SW-NIR Spectroscopy

Authors: K. Petcharaporn, S. Kumchoo

Abstract:

Acidity (citric acid) is one of the chemical constituents that indicates the internal quality and maturity index of tomato. The titratable acidity (%TA) can be predicted non-destructively using transmittance short-wavelength near-infrared (SW-NIR) spectroscopy in the wavelength range of 665-955 nm. A set of 167 tomato samples, divided into a training set of 117 samples and a test set of 50 samples, was used to establish a calibration model for predicting and measuring %TA by the partial least squares regression (PLSR) technique. The spectra were pretreated with MSC pretreatment, which gave the optimal calibration model (R = 0.92, RMSEC = 0.03%), and this model achieved high accuracy when used for %TA prediction on the test set (R = 0.81, RMSEP = 0.05%). The prediction results for the test set show that the transmittance SW-NIR spectroscopy technique can be used as a non-destructive method for %TA prediction of tomatoes.
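
A minimal sketch of the calibration pipeline (multiplicative scatter correction followed by PLSR) is shown below; the spectra, %TA values and the number of latent variables are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean
    spectrum and remove the fitted offset and slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, deg=1)     # x ~ a + b * ref
        corrected[i] = (x - a) / b
    return corrected

# placeholder transmittance spectra (665-955 nm) and %TA reference values
rng = np.random.default_rng(1)
X_train, X_test = rng.random((117, 146)), rng.random((50, 146))
y_train = rng.uniform(0.2, 0.8, 117)

pls = PLSRegression(n_components=8)          # number of latent variables is an assumption
pls.fit(msc(X_train), y_train)
y_pred = pls.predict(msc(X_test, reference=X_train.mean(axis=0)))
print(y_pred[:5].ravel())
```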

Keywords: tomato, quality, prediction, transmittance, titratable acidity, citric acid

Procedia PDF Downloads 267
15663 Low Complexity Deblocking Algorithm

Authors: Jagroop Singh Sidhu, Buta Singh

Abstract:

A low-complexity deblocking filter with three frequency-related modes (smooth mode, intermediate mode, and non-smooth mode for low-frequency, mid-frequency, and high-frequency regions, respectively) is proposed. The suggested approach requires zero additions, zero subtractions, zero multiplications (for the intermediate region), no divisions (for the non-smooth region) and no comparisons. The suggested method thus keeps the computational load low and is suitable for block-based image coding systems. A comparison of the average number of operations for the smooth, non-smooth and intermediate modes (per pixel vector for each block) between the filter suggested by Chen and the proposed filter shows that the proposed filter requires less computation and is thus suitable for fast processing algorithms.

Keywords: blocking artifacts, computational complexity, non-smooth, intermediate, smooth

Procedia PDF Downloads 458
15662 Computational Fluid Dynamics Analysis of Convergent–Divergent Nozzle and Comparison against Theoretical and Experimental Results

Authors: Stewart A. Keir, Faik A. Hamad

Abstract:

This study uses both analytical and experimental methods of analysis to examine the accuracy of Computational Fluid Dynamics (CFD) models that can then be used for more complex analyses, accurately representing more elaborate flow phenomena such as internal shockwaves and boundary layers. The geometry used in the analytical study and the CFD model is taken from the experimental rig. The analytical study is undertaken using isentropic and adiabatic relationships, and from its output a 'shockwave location tool' is created. The results from the analytical study are then used to optimize the redesign of the experimental rig for more favorable placement of pressure taps and to gain a much better representation of the shockwaves occurring in the divergent section of the nozzle. The CFD model is then optimized through the selection of different parameters, e.g. turbulence models (Spalart-Allmaras, Realizable k-epsilon and Standard k-omega), in order to develop an accurate, robust model. The results from the CFD model can then be directly compared to experimental and analytical results in order to gauge the accuracy of each method of analysis. The CFD model will be used to visualize the variation of various parameters such as velocity/Mach number, pressure and turbulence across the shock, and the CFD results will be used to investigate the interaction between the shock wave and the boundary layer. The validated model can then be used to modify nozzle designs which may offer better performance and ease of manufacture, and may present feasible improvements to existing high-speed flow applications.
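
The analytical side rests on standard compressible-flow relations; a minimal sketch of the isentropic area-Mach relation and the normal-shock static pressure ratio is given below. The area ratio of 1.8 is an illustrative value, not a dimension of the rig.

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(mach, gamma=GAMMA):
    """Isentropic A/A* as a function of Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

def supersonic_mach(a_ratio, gamma=GAMMA):
    """Supersonic root of the area-Mach relation for a given A/A*."""
    return brentq(lambda m: area_ratio(m, gamma) - a_ratio, 1.0001, 10.0)

def normal_shock_pressure_ratio(m1, gamma=GAMMA):
    """Static pressure ratio p2/p1 across a normal shock at upstream Mach m1."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1**2 - 1.0)

# example: local area ratio of 1.8 in the divergent section (illustrative value)
m1 = supersonic_mach(1.8)
print(f"M1 = {m1:.3f}, p2/p1 = {normal_shock_pressure_ratio(m1):.3f}")
```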

Keywords: CFD, nozzle, fluent, gas dynamics, shock-wave

Procedia PDF Downloads 229
15661 Colour Quick Response Code with High Damage Resistance Capability

Authors: Minh Nguyen

Abstract:

Today, QR (Quick Response) Codes are prevalent, and mobile/smart devices can efficiently read and understand them. Therefore, we see them in many areas, such as storing web pages/websites, business phone numbers, redirects to an app download, business locations, and social media. The popularity of the QR Code is mainly due to its many advantages: it can hold a good amount of information, it is small, it is easy to scan and read by a general RGB camera, and it can still work with some damage on its surface. However, there are still some issues. For instance, some areas need to be kept untouched for successful decoding (e.g., the “Finder Patterns,” the “Quiet Zone,” etc.), the built-in error correction is not robust enough, and it is not flexible enough for many applications such as Augmented Reality (AR). We propose a new Colour Quick Response Code that has several advantages over the original one: (1) there is no untouchable area, (2) it allows up to 40% of the entire code area to be damaged, (3) it is more beneficial for Augmented Reality applications, and (4) it is backward-compatible and readable by available QR Code scanners such as Pyzbar. In our experience, our Colour Quick Response Code is significantly more tolerant of damage than the original QR Code. We believe our code is suitable in situations where standard 2D barcodes fail to work, such as on curved and shiny surfaces, for instance, medical blood test sample tubes and syringes.
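
Since backward compatibility with standard readers such as Pyzbar is claimed, a minimal decoding check looks like the sketch below; the image file name is hypothetical.

```python
import cv2
from pyzbar.pyzbar import decode

# read an image containing the code and decode it with a standard QR reader
image = cv2.imread("colour_qr_sample.png")   # hypothetical file name
for symbol in decode(image):
    print(symbol.type, symbol.data.decode("utf-8"))
```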

Keywords: QR code, computer vision, image processing, 2D barcode

Procedia PDF Downloads 110
15660 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and it is expected that this growth will continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modelers and planners. The development of a reliable strategic transport model depends significantly on the input data to the model and on the calibrated parameters of the model that reflect the existing situation. Trip distribution is the second step in four-step modelling (FSM) and is complex due to its behavioral nature. The gravity model is the most common method for trip distribution. The spatial separation between the origin and destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people’s behavior in choosing their destinations based on the distance, time and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area and simulate the trip distances, and therefore they should be calibrated for any particular strategic transport model to correctly reflect trip behavior within the modelling area. This paper aims to review the most common deterrence functions and to propose a calibrated deterrence function for work trips within the Perth Metropolitan Area based on the information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth Metropolitan Area to assist with the analysis and findings.
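
To make the role of the deterrence function concrete, the sketch below distributes work trips with a doubly constrained gravity model balanced by the Furness procedure, using a Tanner-type deterrence function f(c) = c^alpha * exp(-beta * c). The zone totals, cost matrix and parameter values are illustrative, not the calibrated Perth values.

```python
import numpy as np

def deterrence(cost, alpha=1.0, beta=0.1):
    """Tanner-type deterrence function f(c) = c^alpha * exp(-beta * c)."""
    return cost**alpha * np.exp(-beta * cost)

def gravity_model(origins, destinations, cost, n_iter=50):
    """Doubly constrained gravity model balanced with the Furness procedure."""
    f = deterrence(cost)
    a = np.ones(len(origins))
    b = np.ones(len(destinations))
    for _ in range(n_iter):
        b = 1.0 / (f.T @ (a * origins))        # balance destination totals
        a = 1.0 / (f @ (b * destinations))     # balance origin totals
    return np.outer(a * origins, b * destinations) * f

origins = np.array([120.0, 80.0, 50.0])        # work-trip productions per zone
destinations = np.array([100.0, 90.0, 60.0])   # attractions per zone
cost = np.array([[5.0, 15.0, 25.0],
                 [15.0, 5.0, 10.0],
                 [25.0, 10.0, 5.0]])           # generalized cost (e.g. minutes)
print(gravity_model(origins, destinations, cost).round(1))
```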

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 164
15659 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such a vehicle. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions; the latter are introduced to best fit the actual rotor behavior and balance the differences existing between the helicopter and the tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA Tilt-Rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
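
As a generic illustration of time-domain identification of a linear transfer function from input/output records (the paper's scheduled transfer functions and optimization procedure are more elaborate than this), the sketch below fits a discrete ARX model by least squares and recovers known coefficients from synthetic data.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX fit of A(q) y = B(q) u: returns the a and b polynomial
    coefficients of the discrete transfer function."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

# synthetic example: identify a known second-order response from input/output data
rng = np.random.default_rng(2)
u = rng.normal(size=500)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.2 * u[k - 1] + 0.05 * u[k - 2]

a_hat, b_hat = fit_arx(u, y)
print(a_hat, b_hat)   # should recover approximately a = [-1.5, 0.7], b = [0.2, 0.05]
```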

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 194
15658 A Simple Finite Element Method for Glioma Tumor Growth Model with Density Dependent Diffusion

Authors: Shangerganesh Lingeshwaran

Abstract:

In this presentation, we have performed numerical simulations for a reaction-diffusion equation with various nonlinear density-dependent diffusion operators and proliferation functions. The mathematical model, represented by a parabolic partial differential equation, is considered to study the invasion of gliomas (the most common type of brain tumors) and to describe the growth of cancer cells and the response to their treatment. The unknown quantity of the given reaction-diffusion equation is the density of cancer cells, and the mathematical model is based on the proliferation and migration of glioma cells. A standard Galerkin finite element method is used to perform the numerical simulations of the given model. Finally, important observations on each of the nonlinear diffusion functions and proliferation functions are presented with the help of computational results.
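
For readers who want to reproduce the qualitative behaviour, the sketch below solves a one-dimensional version of the model, u_t = (D(u) u_x)_x + rho u(1 - u), with a simple explicit finite-difference scheme rather than the Galerkin finite element method used in the paper; the diffusion law D(u) and all constants are assumed forms chosen for illustration.

```python
import numpy as np

# 1D glioma growth model u_t = d/dx( D(u) du/dx ) + rho * u * (1 - u),
# discretized with an explicit finite-difference scheme (a simplified stand-in
# for the Galerkin finite element method used in the paper)
L, nx, nt = 10.0, 201, 20000
dx, dt = L / (nx - 1), 1e-4
rho = 1.0

def D(u):
    """Example density-dependent diffusion coefficient D(u) = D0 * u (assumed form)."""
    return 0.1 * u

u = np.exp(-((np.linspace(0, L, nx) - L / 2) ** 2))     # initial tumour cell density
for _ in range(nt):
    flux = D(0.5 * (u[1:] + u[:-1])) * np.diff(u) / dx  # D evaluated at cell faces
    u[1:-1] += dt * (np.diff(flux) / dx + rho * u[1:-1] * (1.0 - u[1:-1]))
    u[0], u[-1] = u[1], u[-2]                           # zero-flux boundaries
print(u.max(), u.sum() * dx)
```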

Keywords: glioma invasion, nonlinear diffusion, reaction-diffusion, finite element method

Procedia PDF Downloads 225
15657 Improving the Employee Transfer Experience within an Organization

Authors: Drew Fockler

Abstract:

This research examines how to improve an employee's experience when transferring between departments within an organization. It includes a historical review of a Canadian retail organization. Based on this review, gaps are identified between the current and future visions to show where problems with existing training and development practices need to be resolved in order to reduce front-line employee turnover within an organization. The strategies within this paper support leaders through the LEAD (Listen, Explore, Act and Develop) Change Management Model, which supports the change process. This research proposes three possible solutions for improving the experience of an employee who is transferring between departments. The best of these is to create a Training Manager position within the retail store. A Training Manager could support both employees and leadership with the training and development of staff who are moving between departments. Within this research, an implementation plan using the TransX Model was created. The TransX Model is a hybrid of Leader-Member Exchange Theory and Transformational Leadership Theory that facilitates this organizational change by creating a common vision. Finally, this research provides the next steps as well as future considerations to enhance the Training Manager role within an organization.

Keywords: employee transfers, employee engagement, human resources, employee induction, TransX model, lead change management model

Procedia PDF Downloads 74
15656 A Neural Approach for the Offline Recognition of the Arabic Handwritten Words of the Algerian Departments

Authors: Salim Ouchtati, Jean Sequeira, Mouldi Bedda

Abstract:

In this work, we present an offline system for the recognition of the Arabic handwritten words of the Algerian departments. The study is based mainly on evaluating the performance of a neural network trained with the gradient back-propagation algorithm. The parameters used to form the input vector of the neural network are extracted from the binary images of the handwritten word by several methods: distribution parameters, the centered moments of the different projections, and the Barr features. These methods are applied to segments obtained after dividing the binary image of the word into six segments. The classification is achieved by a multilayer perceptron. Detailed experiments are carried out, and satisfactory recognition results are reported.
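
A minimal sketch of the classification stage with a back-propagation-trained multilayer perceptron is shown below; the feature vectors, the number of classes and all hyperparameters are placeholders, not the features or settings described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# placeholder feature vectors: per-word features (distribution parameters, centered
# projection moments, bar-type features) extracted from six segments of each image
rng = np.random.default_rng(3)
n_words, n_features, n_classes = 480, 6 * 20, 48   # 48 department names (assumed count)
X = rng.random((n_words, n_features))
y = rng.integers(0, n_classes, size=n_words)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# multilayer perceptron trained with gradient back-propagation (SGD solver)
mlp = MLPClassifier(hidden_layer_sizes=(100,), solver="sgd",
                    learning_rate_init=0.01, max_iter=1000)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```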

Keywords: handwritten word recognition, neural networks, image processing, pattern recognition, features extraction

Procedia PDF Downloads 509
15655 Numerical Modeling and Characteristic Analysis of a Parabolic Trough Solar Collector

Authors: Alibakhsh Kasaeian, Mohammad Sameti, Zahra Noori, Mona Rastgoo Bahambari

Abstract:

Nowadays, parabolic trough solar collector technology has become the most promising large-scale technology among the various solar thermal generation options. In this paper, a detailed numerical heat transfer model for a parabolic trough collector with nanofluid is presented based on the finite difference approach, for which a MATLAB code was developed. The model was used to simulate the performance of the parabolic trough solar collector's linear receiver, called the heat collector element (HCE). In this model, the heat collector element of the receiver was discretized into several segments in the axial direction, and energy balances were used for each control volume. All the heat transfer correlations, the thermodynamic equations and the optical properties were considered in detail, and the set of algebraic equations was solved simultaneously using iterative numerical solutions. The modeling assumptions and limitations are also discussed, along with recommendations for model improvement.
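
The segment-wise energy balance can be illustrated with a very reduced axial marching calculation, in which each control volume gains the absorbed solar power and loses heat to ambient before the fluid temperature is passed to the next segment. All constants below are illustrative, and the paper's model uses full heat transfer correlations rather than a single loss coefficient.

```python
import numpy as np

# simple axial marching energy balance over the discretized heat collector element (HCE)
n_seg, seg_len = 20, 0.5                 # number of segments, segment length [m]
m_dot, cp = 0.5, 2300.0                  # nanofluid mass flow [kg/s], heat capacity [J/(kg K)]
q_abs = 2500.0                           # absorbed solar power per metre of receiver [W/m]
UL, d_abs = 5.0, 0.07                    # loss coefficient [W/(m^2 K)], absorber diameter [m]
T_amb, T_in = 25.0, 150.0                # ambient and inlet temperatures [C]

T = np.empty(n_seg + 1)
T[0] = T_in
for i in range(n_seg):
    q_loss = UL * np.pi * d_abs * (T[i] - T_amb)            # heat loss per metre of receiver
    T[i + 1] = T[i] + (q_abs - q_loss) * seg_len / (m_dot * cp)
print("outlet temperature [C]:", round(T[-1], 2))
```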

Keywords: heat transfer, nanofluid, numerical analysis, trough

Procedia PDF Downloads 366
15654 Inventory Policy Above Country Level for Cooperating Countries for Vaccines

Authors: Aysun Pınarbaşı, Béla Vizvári

Abstract:

Countries are the units that procured vaccines during the COVID-19 pandemic. The delivered quantities are huge, and the countries must bear the inventory holding cost according to the variation of stock quantities. This cost depends on the speed of vaccination in the country, which is time-dependent. The vaccinated portion of the population can be approximated by the cumulative distribution function of the Cauchy distribution. A model is provided for determining the minimal-cost inventory policy, and its optimality conditions are provided. The model is solved for 20 countries for different numbers of procurements. The results reveal the individual behavior of each country, and we provide an inventory policy for the pandemic period for the countries. This paper presents a deterministic model for vaccines with a time-varying demand rate for the countries. The aim is to provide an analytical model for minimizing the holding cost and to develop inventory policies that can be used for a variety of perishable products such as vaccines. The saturation process is introduced, and an approximation of the vaccination curve of the countries is discussed. On this basis, a deterministic model for the inventory policy has been developed.
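
A small sketch of the building blocks is given below: the vaccinated share follows a Cauchy CDF, a procurement schedule defines deliveries, and the holding cost is accumulated from the resulting stock profile. The campaign parameters, procurement schedule and unit cost are invented for illustration and are not the paper's country data.

```python
import numpy as np
from scipy.stats import cauchy

# vaccinated share of the population approximated by a Cauchy CDF
# (location x0 = peak of the campaign, scale gamma = its spread; values illustrative)
x0, gamma, population = 200.0, 60.0, 10e6
t = np.arange(0, 600)                                   # days of the pandemic period
vaccinated = population * cauchy.cdf(t, loc=x0, scale=gamma)

# inventory profile for a given procurement schedule (day, doses) - hypothetical numbers
procurements = [(0, 3e6), (150, 4e6), (300, 4e6)]
delivered = np.zeros_like(t, dtype=float)
for day, qty in procurements:
    delivered[t >= day] += qty

stock = delivered - (vaccinated - vaccinated[0])        # doses on hand each day
holding_cost = np.maximum(stock, 0.0).sum() * 0.002     # assumed unit holding cost per dose-day
print(f"total holding cost: {holding_cost:,.0f}")
```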

Keywords: COVID-19, vaccination, inventory policy, bounded total demand, inventory holding cost, Cauchy distribution, sigmoid function

Procedia PDF Downloads 70
15653 A Mathematical Investigation of the Turkevich Organizer Theory in the Citrate Method for the Synthesis of Gold Nanoparticles

Authors: Emmanuel Agunloye, Asterios Gavriilidis, Luca Mazzei

Abstract:

Gold nanoparticles are commonly synthesized by reducing chloroauric acid with sodium citrate. This method, referred to as the citrate method, can produce spherical gold nanoparticles (NPs) in the size range 10-150 nm. Gold NPs of this size are useful in many applications. However, the NPs are usually polydisperse and irreproducible. A better understanding of the synthesis mechanisms is thus required. This work thoroughly investigated the only model that describes the synthesis. This model combines mass and population balance equations, describing the NPs synthesis through a sequence of chemical reactions. Chloroauric acid reacts with sodium citrate to form aurous chloride and dicarboxy acetone. The latter organizes aurous chloride in a nucleation step and concurrently degrades into acetone. The unconsumed precursor then grows the formed nuclei. However, depending on the pH, both the precursor and the reducing agent react differently thus affecting the synthesis. In this work, we investigated the model for different conditions of pH, temperature and initial reactant concentrations. To solve the model, we used Parsival, a commercial numerical code, whilst to test it, we considered various conditions studied experimentally by different researchers, for which results are available in the literature. The model poorly predicted the experimental data. We believe that this is because the model does not account for the acid-base properties of both chloroauric acid and sodium citrate.

Keywords: citrate method, gold nanoparticles, Parsival, population balance equations, Turkevich organizer theory

Procedia PDF Downloads 195
15652 Fecundity and Egg Laying in Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae): Model Development and Field Validation

Authors: Muhammad Noor Ul Ane, Dong-Soon Kim, Myron P. Zalucki

Abstract:

Models can be useful to help understand the population dynamics of insects under diverse environmental conditions and in developing strategies to manage pest species better. The adult longevity and fecundity of Helicoverpa armigera (Hübner) were evaluated against a wide range of constant temperatures (15, 20, 25, 30, 35 and 37.5°C). The modified Sharpe and DeMichele model described the adult aging rate and was used to estimate adult physiological age. The maximum fecundity of H. armigera was 973 eggs/female at 25°C, decreasing to 72 eggs/female at 37.5°C. The relationship between adult fecundity and temperature was well described by an extreme value function. The age-specific cumulative oviposition rate and age-specific survival rate were well described by a two-parameter Weibull function and a sigmoid function, respectively. An oviposition model was developed using three temperature-dependent components: total fecundity, age-specific oviposition rate, and age-specific survival rate. The oviposition model was validated against independent field data and described the field occurrence pattern of the egg population of H. armigera very well. Our model should be a useful component for population modeling of H. armigera and can be used independently for the timing of sprays in management programs for this key pest species.

Keywords: cotton bollworm, life table, temperature-dependent adult development, temperature-dependent fecundity

Procedia PDF Downloads 147
15651 Use of Fractal Geometry in Machine Learning

Authors: Fuad M. Alkoot

Abstract:

The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, many classifiers are combined using any of the available methods to reduce the classifier error rate. The benefits gained from the combination of multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate the use of fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy.
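
The most common way to measure the fractal dimension of a data set is box counting; a minimal sketch is given below, verified on a point set lying on a line (whose dimension should come out close to 1). How the dimension would feed into the combiner strategy is not specified in the abstract.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (Minkowski) fractal dimension of a 2D point set:
    count occupied boxes N(s) at each box size s and fit log N(s) ~ -D log s."""
    counts = []
    for s in box_sizes:
        boxes = np.floor(points / s)
        counts.append(len(np.unique(boxes, axis=0)))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# example: points scattered along a line segment should give a dimension close to 1
rng = np.random.default_rng(4)
xs = rng.random(5000)
line_points = np.column_stack([xs, 0.5 * xs])
print(box_counting_dimension(line_points, box_sizes=[0.2, 0.1, 0.05, 0.025, 0.0125]))
```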

Keywords: fractal geometry, machine learning, classifier, fractal dimension

Procedia PDF Downloads 208
15650 Biodiversity and Climate Change: Consequences for Norway Spruce Mountain Forests in Slovakia

Authors: Jozef Mindas, Jaroslav Skvarenina, Jana Skvareninova

Abstract:

Studies of the effects of climate change on Norway spruce (Picea abies) forests have mainly focused on tree species diversity, as a result of the differing ability of species to tolerate temperature and moisture changes, as well as on some effects of changes in the disturbance regime. The tree species diversity changes in spruce forests due to climate change have been analyzed via a gap model. A forest gap model is a dynamic model for calculating the basic characteristics of individual forest trees. Input ecological data for the model calculations have been taken from permanent research plots located in primeval forests in mountainous regions of Slovakia. The results of regional climate change scenarios for the territory of Slovakia have been used, with values taken from the CGCM3.1 (global) model and the KNMI and MPI (regional) models. Model results for the climate change scenario conditions suggest a shift of the upper forest limit from the supramontane zone to the region of the present subalpine zone. Norway spruce representation will decrease at the expense of beech and precious broadleaved species (Acer sp., Sorbus sp., Fraxinus sp.). The most significant tree species diversity changes have been identified for the upper tree line and the current belt of dwarf pine (Pinus mugo) occurrence. The results are also discussed in relation to the most important disturbances (wind storms, snow and ice storms) and phenological changes, whose consequences are little known. A special discussion is focused on biomass production changes in relation to carbon storage diversity in different carbon pools.

Keywords: biodiversity, climate change, Norway spruce forests, gap model

Procedia PDF Downloads 282
15649 A Bi-Objective Model to Optimize the Total Time and Idle Probability for Facility Location Problem Behaving as M/M/1/K Queues

Authors: Amirhossein Chambari

Abstract:

This article proposes a bi-objective model for the facility location problem subject to congestion (overcrowding), motivated by applications such as locating servers for internet mirror sites, communication networks and single-server systems. The model considers situations in which immobile (fixed) service facilities are congested (queued) by stochastic demand and behave as M/M/1/K queues. We consider two simultaneous perspectives for this problem: (1) the customers, who wish to limit their access and waiting times, and (2) the service provider, who wishes to limit the average facility idle time. A bi-objective model is set up for the facility location problem with two objective functions: (1) minimizing the sum of the expected total traveling and waiting time (customers) and (2) minimizing the average facility idle-time percentage (service provider). The proposed model belongs to the class of mixed-integer nonlinear programming models and to the class of NP-hard problems. To solve the model, controlled elitist non-dominated sorting genetic algorithms (controlled NSGA-II) and controlled elitist non-dominated ranking genetic algorithms (NRGA-I) are proposed. Furthermore, the two proposed metaheuristic algorithms are evaluated using standard multi-objective metrics. Finally, the results are analyzed and some conclusions are given.
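
The two objectives rest on standard M/M/1/K results; a short sketch of the idle probability and the expected time in the system for a single candidate facility is given below (the full bi-objective location model and the genetic algorithms are beyond this snippet). The arrival rate, service rate and capacity are illustrative.

```python
def mm1k_metrics(lam, mu, K):
    """Idle probability and expected time in system for an M/M/1/K queue
    (standard formulas, valid for rho != 1)."""
    rho = lam / mu
    p0 = (1 - rho) / (1 - rho ** (K + 1))          # probability the facility is idle
    pK = p0 * rho ** K                             # blocking probability (system full)
    L = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
    lam_eff = lam * (1 - pK)                       # accepted arrival rate
    W = L / lam_eff                                # expected total time in the system
    return p0, W

# example facility: arrival rate 4/hr, service rate 5/hr, capacity K = 10
idle_prob, total_time = mm1k_metrics(lam=4.0, mu=5.0, K=10)
print(f"idle probability = {idle_prob:.3f}, expected time in system = {total_time:.3f} h")
```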

Keywords: bi-objective, facility location, queueing, controlled NSGA-II, NRGA-I

Procedia PDF Downloads 579
15648 A System Dynamics Approach to Exploring Personality Traits in Young Children

Authors: Misagh Faezipour

Abstract:

System dynamics is a systems engineering approach that can help address complex challenges in different systems. Little is known about how the brain represents people in order to predict behavior. This work examines how the brain simulates different personal behaviors and responds to them in the case of young children aged one to five. As we know, children's minds start out as clear as crystal, and over time, through their surroundings, families and education centers, they grow to develop different kinds of behavior towards the world and the society they live in. Hence, this work aims to identify how young children respond to various personality behaviors and to observe their reactions towards them from a system dynamics perspective. We explore the Big Five personality traits in young children. A causal model is developed in support of the system dynamics approach. The model graphically presents the factors and factor relationships that contribute to the Big Five personality traits and provides a better understanding of the entire behavior model. A simulator will be developed that includes the set of causal model factors and factor relationships. The simulator models the behavior of different factors related to personality traits and their impacts, and can help in making more informed decisions in a risk-free environment.

Keywords: personality traits, systems engineering, system dynamics, causal model, behavior model

Procedia PDF Downloads 90
15647 Zebrafish Larvae Model: A High Throughput Screening Tool to Study Autism

Authors: Shubham Dwivedi, Raghavender Medishetti, Rita Rani, Aarti Sevilimedu, Pushkar Kulkarni, Yogeeswari Perumal

Abstract:

Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder of early onset, characterized by impaired sociability, impaired cognitive function and stereotypies. There is a pressing need to develop and establish new animal models with ASD-like characteristics for a better understanding of the underlying mechanisms. The aim of the present study was to develop a cost- and time-effective zebrafish model with quantifiable parameters to facilitate mechanistic studies as well as high-throughput screening of new molecules for autism. Zebrafish embryos were treated with valproic acid, and a battery of behavioral tests (anxiety, inattentive behavior, irritability and social impairment) was performed on larvae at 7 days post-fertilization, followed by a study of molecular markers of autism. The model shows a significant behavioural impairment in valproic acid-treated larvae in comparison to controls, which is further supported by alterations in a few marker genes and proteins of autism. The model also shows a rescue of behavioural despair with positive control drugs. The model provides robust parameters to study behavior, molecular mechanisms and a drug screening approach in a single frame. Thus, we postulate that our 7-day zebrafish larval model for autism can help in the high-throughput screening of new molecules for autism.

Keywords: autism, zebrafish, valproic acid, neurodevelopment, behavioral assay

Procedia PDF Downloads 158
15646 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management can only be done if we are able to calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; on the other hand, flood risk calculation is still very limited in the area. This study calculates the risk of flooding for Jakarta using the two-dimensional model ANUGA, together with the one-dimensional model HEC-RAS, for 13 major rivers in Jakarta. ANUGA can simulate the physical and dynamical processes between the streamflow and the river geometry and land cover to produce a 1-meter resolution inundation map. The streamflow values used as model input were obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. Probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal and Gumbel statistical distributions, with goodness of fit tested using the Chi-Square and Kolmogorov-Smirnov tests. The flood event of 2007 is used as a comparison to evaluate the accuracy of the model output. Property damage estimates were calculated based on flood depth for the 1, 5, 10, 25, 50, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's estimated flood losses for the return periods of 1, 5, 10, 25, 50, and 100 years are, respectively, Rp 1.30 t; Rp 16.18 t; Rp 16.85 t; Rp 21.21 t; Rp 24.32 t; and Rp 24.67 t, against a total building value of Rp 434.43 t.
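
The frequency-analysis step can be illustrated with a Gumbel fit to annual maxima and the quantiles for the studied return periods, as sketched below. The sample data are invented, the 1-year event is omitted because its non-exceedance probability is zero, and the study additionally fits Log-Pearson III and Normal distributions and tests goodness of fit.

```python
import numpy as np
from scipy.stats import gumbel_r

# fit a Gumbel distribution to annual maximum rainfall and derive design values
# for the studied return periods (sample data are illustrative)
annual_maxima = np.array([112.0, 95.0, 140.0, 88.0, 123.0, 156.0, 101.0,
                          134.0, 119.0, 147.0, 92.0, 128.0])   # mm/day
loc, scale = gumbel_r.fit(annual_maxima)

for T in [5, 10, 25, 50, 100]:
    p_non_exceedance = 1.0 - 1.0 / T
    design_value = gumbel_r.ppf(p_non_exceedance, loc=loc, scale=scale)
    print(f"T = {T:3d} yr: design rainfall = {design_value:.1f} mm/day")
```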

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 272
15645 A Framework for SQL Learning: Linking Learning Taxonomy, Cognitive Model and Cross Cutting Factors

Authors: Huda Al Shuaily, Karen Renaud

Abstract:

Databases form the foundation of most software systems, and system developers inevitably write code to query these databases. The de facto language for querying is SQL and this, consequently, is the default language taught by higher education institutions. There is evidence that learners find it hard to master SQL, harder than mastering other programming languages such as Java. Educators do not agree on explanations for this seeming anomaly, and further investigation may well reveal the reasons. In this paper, we report on our investigations into how novices learn SQL, the actual problems they experience when writing SQL, and the differences between expert and novice SQL query writers. We conclude by presenting a model of SQL learning that should inform the design of instructional material to better support the SQL learning process.

Keywords: pattern, SQL, learning, model

Procedia PDF Downloads 253
15644 Ecological Systems Theory, the SCERTS Model, and the Autism Spectrum, Node and Nexus

Authors: C. Surmei

Abstract:

Autism Spectrum Disorder (ASD) is a complex developmental disorder that can affect (but is not limited to) an individual's cognitive development, emotional development, language acquisition and capability to relate to others. Ecological Systems Theory is a sociocultural theory that focuses on the environmental systems with which an individual interacts. The SCERTS Model is an educational approach and multidisciplinary framework that addresses the challenges confronted by individuals on the autism spectrum and with other developmental disabilities. To aid the understanding of ASD and educational philosophies for families, educators, and the global community alike, a comparative analysis was undertaken to examine key variables (the child, society, education, nurture/care, relationships, communication). The results indicated that Ecological Systems Theory and the SCERTS Model are comparable in focus, motivation, and application, pointing to a viable and notable relationship between the two theories. This paper unpacks two child development philosophies and their relationship to each other.

Keywords: autism spectrum disorder, ecological systems theory, education, SCERTS model

Procedia PDF Downloads 576
15643 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of this research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to summarize the subjects or samples that the research has actually measured and to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, argued in the 19th century that human knowledge, whether in the humanities and social sciences or in the natural sciences, should be verified in a scientific way to construct a universal theory that explains the truth and human behavior. Inferential statistics, which makes judgments about the probability that a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and infers from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte's argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners' corpus output errors by using one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples' Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study uses one-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in 'shì (是) sentence' and word order errors, but compared with American and Singaporean learners, it is significantly easier for Myanmar learners to produce 'sentence blends.' Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can adopt learning strategies to reduce errors.
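
The statistical core of the study is a one-way ANOVA across the three L1 groups; a minimal version with scipy is sketched below, with placeholder error counts standing in for the corpus data.

```python
from scipy.stats import f_oneway

# error counts (e.g. "sentence blend" errors per learner) for the three L1 groups;
# the numbers below are placeholders, not the study's corpus data
american  = [2, 1, 3, 0, 2, 1, 2]
myanmar   = [5, 4, 6, 3, 5, 4, 7]
singapore = [1, 2, 1, 3, 2, 0, 2]

f_stat, p_value = f_oneway(american, myanmar, singapore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference between groups, to be followed
# by post-hoc pairwise comparisons to locate which groups differ
```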

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 101
15642 Predicting Durability of Self Compacting Concrete Using Artificial Neural Network

Authors: R. Boudjelthia

Abstract:

The aim of this study is to determine the influence of the mix composition of concrete, such as the water and cement content, the water-binder ratio, and the replacement of cement by fly ash, on the durability of self-compacting concrete (SCC) by using artificial neural networks (ANNs). To achieve this, an ANN model is developed to predict the durability of self-compacting concrete, expressed in terms of chloride ion permeability in accordance with ASTM C1202-97 or AASHTO T277. A database gathered from the literature was used for training and testing the model. A sensitivity analysis was also conducted using the trained and tested ANN model to investigate the effect of fly ash on the durability of SCC. The results indicate that the developed model is reliable and accurate. The durability of SCC, expressed in terms of the total charge passed over a 6-h period, can be significantly improved by using at least 25% fly ash as a replacement for cement. This study shows that artificial neural networks have strong potential as a feasible tool for accurately predicting the durability of SCC containing fly ash.

Keywords: artificial neural networks, durability, chloride ions permeability, self compacting concrete

Procedia PDF Downloads 375
15641 Synthesis and Electromagnetic Property of Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ Grafted with Polyaniline Fibers

Authors: Jintang Zhou, Zhengjun Yao, Tiantian Yao

Abstract:

Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄ (LZFO) grafted with polyaniline (PANI) fibers was synthesized by in situ polymerization. FTIR, XRD, SEM, and a vector network analyzer were used to investigate the chemical composition, micro-morphology, electromagnetic properties and microwave absorbing properties of the composite. The results show that PANI fibers were grafted onto the surfaces of the LZFO particles. The reflection loss exceeds 10 dB in the frequency ranges from 2.5 to 5 GHz and from 15 to 17 GHz, and the maximum reflection loss reaches -33 dB at 15.9 GHz. The enhanced microwave absorption properties of the LZFO/PANI-fiber composites are mainly ascribed to the combined effect of dielectric loss and magnetic loss and to the improved impedance matching.
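
The reflection loss of this kind of single-layer absorber is usually computed from the VNA-measured complex permittivity and permeability via transmission-line theory; a sketch is below, with invented material parameters and sample thicknesses rather than the measured LZFO/PANI values.

```python
import numpy as np

def reflection_loss(eps_r, mu_r, freq_hz, thickness_m):
    """Reflection loss (dB) of a single-layer absorber backed by a metal plate,
    from transmission-line theory:
    Zin = sqrt(mu/eps) * tanh(j * 2*pi*f*d/c * sqrt(mu*eps))."""
    c = 3e8
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * freq_hz * thickness_m / c * np.sqrt(mu_r * eps_r))
    return 20.0 * np.log10(np.abs((z_in - 1.0) / (z_in + 1.0)))

# illustrative complex permittivity/permeability at 15.9 GHz (not the measured values)
eps_r = 8.0 - 2.5j
mu_r = 1.2 - 0.6j
for d_mm in (1.5, 2.0, 2.5):
    rl = reflection_loss(eps_r, mu_r, 15.9e9, d_mm * 1e-3)
    print(f"d = {d_mm} mm: RL = {rl:.1f} dB")
```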

Keywords: Li₀.₃₅Zn₀.₃Fe₂.₃₅O₄, polyaniline, electromagnetic properties, microwave absorbing properties

Procedia PDF Downloads 427
15640 Multi-Criteria Goal Programming Model for Sustainable Development of India

Authors: Irfan Ali, Srikant Gupta, Aquil Ahmed

Abstract:

Every country needs sustainable development (SD) for its economic growth, achieved by forming suitable policies and initiating programs for the development of the country's different sectors. This paper comprises the modeling and optimization of different sectors of India, which together form a multi-criteria model. We developed a fractional goal programming (FGP) model that helps provide an efficient allocation of resources while simultaneously achieving the sustainability goals for gross domestic product (GDP), electricity consumption (EC) and greenhouse gas (GHG) emissions by the year 2030. A weighted FGP model is also presented to obtain varying solutions according to the priorities set by the policy maker for achieving the future goals for GDP growth, EC and GHG emissions. The presented models provide useful insight to decision makers for implementing strategies in the different sectors.

Keywords: sustainable and economic development, multi-objective fractional programming, fuzzy goal programming, weighted fuzzy goal programming

Procedia PDF Downloads 220
15639 Energy Analysis of Seasonal Air Conditioning Demand of All Income Classes Using Bottom up Model in Pakistan

Authors: Saba Arif, Anam Nadeem, Roman Kalvin, Tanzeel Rashid, Burhan Ali, Juntakan Taweekun

Abstract:

Currently, the energy crisis is receiving serious attention. Globally, industry and buildings are the major consumers of energy; 72% of total global energy is consumed by residential houses, markets, and commercial buildings. Among appliances, air conditioners are a major consumer of electricity; about 60% of household energy is used for cooling by HVAC units. Estimates of energy demand help determine what changes will be needed, whether in estimating the energy required for households or in instituting conservation measures. The bottom-up model is one of the best-known methods for forecasting. In the current research, a bottom-up model of air conditioners' energy consumption in all income classes is calculated, together with its seasonal variation and hourly consumption. By comparing the air-conditioning energy consumption of all income classes, the total actual demand can be compared with the current availability.
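
The bottom-up logic can be sketched as a simple aggregation over income classes, multiplying households, air-conditioner ownership, rated power and seasonal usage hours; all figures below are invented placeholders, not the survey values for Pakistan.

```python
import pandas as pd

# bottom-up estimate: energy per class = households * AC ownership share
#                                        * AC power * seasonal usage hours
classes = pd.DataFrame({
    "income_class": ["low", "lower-middle", "upper-middle", "high"],
    "households":   [6_000_000, 4_000_000, 2_500_000, 1_000_000],
    "ac_ownership": [0.05, 0.25, 0.60, 0.90],      # fraction of households owning an AC
    "ac_power_kw":  [1.2, 1.5, 1.8, 2.2],
    "summer_hours": [300, 600, 900, 1200],         # cooling hours per season
})

classes["season_gwh"] = (classes["households"] * classes["ac_ownership"]
                         * classes["ac_power_kw"] * classes["summer_hours"]) / 1e6
print(classes[["income_class", "season_gwh"]])
print("total seasonal AC demand (GWh):", round(classes["season_gwh"].sum(), 1))
```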

Keywords: air conditioning, bottom up model, income classes, energy demand

Procedia PDF Downloads 241
15638 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification

Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong

Abstract:

It remains challenging to measure reliability from the classification results of different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function for fusing a series of weights in breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials; the results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibits low multicenter correlation.
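
Model Soup-style weight fusion, which the proposed algorithm builds on, amounts to averaging the corresponding parameter tensors of several checkpoints; a minimal uniform-soup sketch in PyTorch is below. The reliability-driven selection and weighting described in the abstract would replace the uniform average, and the toy linear layer stands in for the actual ultrasound classifiers.

```python
import torch

def uniform_soup(state_dicts):
    """Uniform model soup: average the corresponding weight tensors of several
    fine-tuned models sharing the same architecture."""
    soup = {}
    for key in state_dicts[0]:
        soup[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return soup

# example with a toy classifier head; in practice these would be
# breast-ultrasound classification checkpoints
models = [torch.nn.Linear(128, 2) for _ in range(3)]
soup_state = uniform_soup([m.state_dict() for m in models])

fused = torch.nn.Linear(128, 2)
fused.load_state_dict(soup_state)   # fused model carries the averaged weights
```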

Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization

Procedia PDF Downloads 80