Search results for: split window algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4346

2456 Canadian High School Students' Attitudes and Perspectives Towards People with Disabilities, Autism and Attention Deficit Hyperactivity Disorder (ADHD)

Authors: Khodi Morgan, Kasey Crowe, Amanda Morgan

Abstract:

Objective: To survey Canadian high school students regarding their attitudes and perspectives towards people with disabilities and to explore how age, gender, and personal experience with a disability may shape these views. Methods: A survey was developed using the standardized Attitude Toward Persons With Disability Scale as its base, with additional questions specifically about Autism and Attention Deficit Hyperactivity Disorder (ADHD). The survey also gathered information about participants' age and gender and whether they, or a close family member, had any disabilities. Participants were recruited at a public Canadian high school by fellow student researchers. Results: A total of 219 students (N = 219) ranging from 13 to 19 years old participated in the study (M = 15.9 years). Gender was roughly evenly split, with 44% male, 42% female, and 14% undeclared. Experience with a disability was common among participants: 25% self-identified as having a personal disability and 48% reported having a close family member with a disability. Exploratory trends indicated that females, people with self-identified disabilities, and people with close family members with disabilities trended towards more positive attitudes toward persons with disabilities.

Keywords: disability, autism, ADHD, high school, adolescence, community research, acceptance

Procedia PDF Downloads 76
2455 Effects of Deficit Watering and Potassium Fertigation on Growth and Yield Response of Cassava

Authors: Daniel O. Wasonga, Jouko Kleemola, Laura Alakukku, Pirjo Makela

Abstract:

Cassava (Manihot esculenta Crantz) is a major food crop for millions of people in the tropics. Growth and yield of cassava in the arid-tropics are seriously constrained by intermittent water deficit and low soil K content. Therefore, experiments were conducted to investigate the effects of interaction between water deficit and K fertigation on growth and yield response of biofortified cassava at early growth phase. Yellow cassava cultivar was grown under controlled glasshouse conditions in 5-L pots containing 1.7 kg of pre-fertilized potting mix. Plants were watered daily for 30 days after planting. Treatments were three watering levels (30%, severe water deficit; 60%, mild water deficit; 100%, well-watered), on which K (0.01, 1, 4, 16 and 32 mM) was split. Plants were harvested at 90 days after planting. Leaf area was smallest in plants grown with 30% watering and 0.01 mM K, and largest in plants grown with 100% watering and 32 mM K. Leaf, root, and total dry mass decreased in water-stressed plants. However, dry mass was markedly higher when plants were grown with 16 mM K under all watering levels in comparison to other K concentrations. The highest leaf, root and total dry mass were in plants with 100% watering and 16 mM K. In conclusion, K improved the growth of plants under water deficit and thus, K application on soils with low moisture and low K may improve the productivity of cassava.

Keywords: dry mass, interaction, leaf area, Manihot esculenta

Procedia PDF Downloads 114
2454 Uncovering the Role of Crystal Phase in Determining Nonvolatile Flash Memory Device Performance Based on 2D Van Der Waals Heterostructures

Authors: Yunpeng Xia, Jiajia Zha, Haoxin Huang, Hau Ping Chan, Chaoliang Tan

Abstract:

Although the crystal phase of two-dimensional (2D) transition metal dichalcogenides (TMDs) has been proven to play an essential role in fabricating high-performance electronic devices in the past decade, its effect on the performance of 2D material-based flash memory devices still remains unclear. Here, we report the exploration of the effect of MoTe₂ in different phases as the charge trapping layer on the performance of 2D van der Waals (vdW) heterostructure-based flash memory devices, where the metallic 1T′-MoTe₂ or semiconducting 2H-MoTe₂ nanoflake is used as the floating gate. By conducting comprehensive measurements on the two kinds of vdW heterostructure-based devices, the memory device based on MoS₂/h-BN/1T′-MoTe₂ presents much better performance, including a larger memory window, faster switching speed (100 ns) and higher extinction ratio (10⁷), than the device based on the MoS₂/h-BN/2H-MoTe₂ heterostructure. Moreover, the device based on the MoS₂/h-BN/1T′-MoTe₂ heterostructure also shows long cycling endurance (>1200 cycles) and retention (>3000 s) stability. Our study clearly demonstrates that the crystal phase of 2D TMDs has a significant impact on the performance of nonvolatile flash memory devices based on 2D vdW heterostructures, which paves the way for the fabrication of future high-performance memory devices based on 2D materials.

Keywords: crystal phase, 2D van der Waals heterostructure, flash memory device, floating gate

Procedia PDF Downloads 49
2453 Supercomputer Simulation of Magnetic Multilayer Films

Authors: Vitalii Yu. Kapitan, Aleksandr V. Perzhu, Konstantin V. Nefedev

Abstract:

The necessity of studying magnetic multilayer structures is explained by the prospects of their practical application as a technological base for new storage media. Magnetic multilayer films have many unique features that contribute to increasing the density of information recording and the speed of storage devices. Multilayer structures consist of alternating magnetic and nonmagnetic layers. Within the framework of the classical Heisenberg model, lattice spin systems with direct short- and long-range exchange interactions were investigated by Monte Carlo methods. The thermodynamic characteristics of multilayer structures, such as the temperature behavior of magnetization, energy, and heat capacity, were studied, as were the processes of magnetization reversal in external magnetic fields. The developed software is based on the promising programming language Rust, developed by Mozilla and positioned as an alternative to C and C++. For the Monte Carlo simulation, the Metropolis algorithm and its parallel implementation using MPI, as well as the Wang-Landau algorithm, were used. We plan to study magnetic multilayer films with asymmetric Dzyaloshinskii–Moriya (DM) interaction, interface effects, and skyrmion textures. This work was supported by the state task of the Ministry of Education and Science of Russia # 3.7383.2017/8.9.
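The single-spin Metropolis update described in this abstract can be sketched as follows for the classical Heisenberg model. This is a minimal, hedged illustration: a small 2-D square lattice with nearest-neighbour exchange only and periodic boundaries; the lattice size, exchange constant J, and temperature are illustrative, not the paper's.

```python
import numpy as np

def random_unit_vectors(n, rng):
    """Uniform unit vectors on the sphere (normalized Gaussians)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def site_energy(spins, i, j, J=1.0):
    """Exchange energy of site (i, j) with its four nearest neighbours (periodic boundaries)."""
    L = spins.shape[0]
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    return -J * np.dot(spins[i, j], nb)

def metropolis_sweep(spins, T, rng, J=1.0):
    """One Metropolis sweep: propose a fresh random orientation per site,
    accept with probability min(1, exp(-dE / T))."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = spins[i, j].copy()
        e_old = site_energy(spins, i, j, J)
        spins[i, j] = random_unit_vectors(1, rng)[0]
        dE = site_energy(spins, i, j, J) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            spins[i, j] = old          # reject the move
    return spins

rng = np.random.default_rng(0)
L = 8
spins = random_unit_vectors(L * L, rng).reshape(L, L, 3)
for _ in range(50):
    metropolis_sweep(spins, T=0.1, rng=rng)
magnetization = np.linalg.norm(spins.mean(axis=(0, 1)))
```

At low temperature the spins align and the magnetization per site approaches 1; the multilayer case would stack such lattices with layer-dependent couplings.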

Keywords: Monte Carlo methods, Heisenberg model, multilayer structures, magnetic skyrmion

Procedia PDF Downloads 163
2452 Effect of Aerobics Exercise on the Patient with Anxiety Disorder

Authors: Ahmed A. Abd El Rahim, Andrew Anis Fakhrey Mosaad

Abstract:

Background: Anxiety disorders are an important psychological issue with an impact on both mental and physical function. The general consensus is that aerobic exercise and physical activity are effective in reducing anxiety and improving mood. Purpose: This study's goal was to investigate how patients with anxiety disorders respond to aerobic exercise. Subjects: Thirty individuals with diagnosed anxiety disorders, aged 25 to 45, were selected from the psychiatric hospital at Sohag University based on inclusion criteria. Methods: Patients were randomly split into two equal groups. The fifteen patients in group A (the study group; seven men and eight women) underwent medication therapy and aerobic exercise for four weeks, three sessions per week. Their mean ± SD values were: age 28.4 ± 2.11 years, weight 72.5 ± 10.06 kg, height 164.8 ± 9.64 cm, and BMI 26.65 ± 2.68 kg/m². The fifteen patients in group B (the control group; nine men and six women) received medication therapy only. Their mean ± SD values were: age 29.6 ± 3.68 years, weight 75 ± 7.07 kg, height 166.9 ± 6.75 cm, and BMI 26.87 ± 1.11 kg/m². Before and after treatment, the Hamilton Anxiety Scale was used to measure patients' level of anxiety. Results: There were significant differences within both groups before and after treatment. After treatment, there was a significant difference between the two groups: the study group showed greater improvement on the Hamilton Anxiety Scale. Conclusion: Aerobic exercise combined with antianxiety medication is an effective treatment for lowering anxiety levels in patients with anxiety disorders.

Keywords: aerobic exercises, anxiety disorders, antianxiety medications, Hamilton anxiety scale

Procedia PDF Downloads 82
2451 Nine-Level Shunt Active Power Filter Associated with a Photovoltaic Array Coupled to the Electrical Distribution Network

Authors: Zahzouh Zoubir, Bouzaouit Azzeddine, Gahgah Mounir

Abstract:

The growing use of electronic power switches with nonlinear behavior generates non-sinusoidal currents in distribution networks, which causes damage to domestic and industrial equipment. The multi-level shunt active power filter is shown to be an adequate solution to this problem. Nevertheless, the difficulty of regulating the active filter's DC supply voltage requires another technology to ensure it. In this article, a photovoltaic generator is connected to the DC bus terminals of the active filter. The proposed system consists of a field of solar panels, three multi-level voltage inverters connected to the power grid, and a nonlinear load consisting of a six-diode rectifier bridge supplying a resistive-inductive load. Current control techniques for active and reactive power are used to compensate for both harmonic currents and reactive power, as well as to inject active solar power into the distribution network. A Perturb and Observe algorithm is applied to track the maximum power point. Simulation results obtained under the MATLAB/Simulink environment show the performance of the control commands, which ensure solar power injection into the network, harmonic current compensation, and power factor correction.
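The Perturb and Observe search mentioned above follows a simple rule: keep perturbing the operating voltage in the same direction while the measured power rises, and reverse direction when it falls. A minimal sketch, with a toy stand-in P-V curve rather than a real panel model:

```python
def perturb_and_observe(power, prev_power, voltage, prev_voltage, step=0.5):
    """One Perturb-and-Observe iteration: keep moving the operating voltage in
    the same direction while the measured power rises; reverse when it falls."""
    going_up = voltage >= prev_voltage
    if power >= prev_power:
        direction = 1 if going_up else -1
    else:
        direction = -1 if going_up else 1
    return voltage + direction * step

def pv_power(v):
    """Toy stand-in for a PV panel P-V curve with its maximum at 17 V (illustrative)."""
    return max(0.0, 60.0 - (v - 17.0) ** 2)

v_prev, v = 10.0, 10.5
for _ in range(100):
    v_prev, v = v, perturb_and_observe(pv_power(v), pv_power(v_prev), v, v_prev)
```

The operating point climbs to the maximum and then oscillates around it within one step size, which is the well-known trade-off of P&O between tracking speed and steady-state ripple.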

Keywords: active power filter, MPPT, perturb and observe algorithm, PV array, PWM control

Procedia PDF Downloads 335
2450 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Learning-based methods generally perform poorly on training datasets with long-tail distributions, and failure prediction is a classic example because failure data are scarce. To overcome this problem, a new oversampling technique was employed to augment the data, and an improved CNN-LSTM with a shortcut connection was built to learn more effective features. The shortcut transmits the results of the previous CNN layer, which, after weighted fusion with the output of the next layer, is used as the input of the LSTM model. Finally, a detailed empirical comparison of six prediction methods on a public dataset is presented and discussed. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
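The abstract does not specify the oversampling scheme, so as a hedged illustration of the rebalancing step alone (not the paper's method, and not the CNN-LSTM itself), here is plain random oversampling of the minority failure class; SMOTE-style interpolation is a common alternative:

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate minority-class samples (with replacement) until all classes
    reach the size of the largest one - a stand-in for the paper's
    (unspecified) oversampling step."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    parts_X, parts_y = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=n_max - idx.size, replace=True)
        keep = np.concatenate([idx, extra])
        parts_X.append(X[keep])
        parts_y.append(y[keep])
    return np.concatenate(parts_X), np.concatenate(parts_y)

rng = np.random.default_rng(1)
# Hypothetical long-tail SMART data: 990 healthy samples vs 10 failures.
X = rng.normal(size=(1000, 8))
y = np.array([0] * 990 + [1] * 10)
X_bal, y_bal = random_oversample(X, y, rng)
```

After rebalancing, both classes contribute equally to the loss, which is what lets a learned model pay attention to the rare failure class.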

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 77
2449 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality in the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index remains a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even with a regularization method, appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison. The second method is an iterative regularization method based on Padé iteration, where the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
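The noise amplification and its cure by truncated SVD can be sketched on a synthetic ill-posed system. This is a generic illustration, not the paper's lidar kernel: the smooth forward operator, the "PSD", and the noise level are all made up for the demo.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: keep only the k largest singular
    values, so the small ones cannot amplify measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

rng = np.random.default_rng(0)
n = 40
t = np.linspace(0.0, 1.0, n)
# Smooth (hence severely ill-conditioned) forward operator acting on a size distribution.
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02)
x_true = np.exp(-((t - 0.5) / 0.15) ** 2)          # mono-modal "PSD"
b = A @ x_true + 1e-4 * rng.normal(size=n)         # noisy synthetic optical data

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]     # unregularized: noise blows up
x_reg = tsvd_solve(A, b, k=10)                     # truncation index k = regularization parameter
```

Here the truncation index k plays the role of the regularization parameter; in the Padé-iteration method described in the abstract, the number of iteration steps plays the same role.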

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 339
2448 Vehicular Speed Detection Camera System Using Video Stream

Authors: C. A. Anser Pasha

Abstract:

In this paper, a new vehicular Speed Detection Camera System (SDCS), applicable as an alternative to traditional radars with the same or better accuracy, is presented. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering, and various algorithms based on them have been applied to detect and track multiple vehicles. The SDCS process can be divided into three successive phases. The first phase is object detection, which uses a hybrid algorithm combining adaptive background subtraction with three-frame differencing; the differencing step rectifies the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. Tracking takes into consideration the different possible scenarios of a moving object: simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving while another enters the scene. The third phase is speed calculation, where speed is derived from the number of frames the object takes to pass through the scene.
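The three-frame differencing component of the detection phase can be sketched as follows; this is a minimal, hedged version on a synthetic one-pixel "vehicle", without the adaptive background model it would be combined with in practice:

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, thresh=20):
    """Motion mask from three consecutive grayscale frames: a pixel counts as
    moving only if it differs from BOTH the previous and the next frame,
    which suppresses the ghost trail left by plain two-frame differencing."""
    d1 = np.abs(f_curr.astype(int) - f_prev.astype(int)) > thresh
    d2 = np.abs(f_next.astype(int) - f_curr.astype(int)) > thresh
    return d1 & d2

# Synthetic 1-pixel "vehicle" moving right across a flat background.
h, w = 8, 16
frames = []
for x in (4, 5, 6):
    f = np.zeros((h, w), dtype=np.uint8)
    f[4, x] = 255
    frames.append(f)
mask = three_frame_difference(*frames)
```

The AND of the two difference images keeps only the object's current position, whereas a single difference would also flag the position it just left.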

Keywords: radar, image processing, detection, tracking, segmentation

Procedia PDF Downloads 463
2447 Electrolyte Loaded Hexagonal Boron Nitride/Polyacrylonitrile Nanofibers for Lithium Ion Battery Application

Authors: Umran Kurtan, Hamide Aydin, Sevim Unugur Celik, Ayhan Bozkurt

Abstract:

In the present work, novel hBN/polyacrylonitrile (PAN) composite nanofibers were produced via an electrospinning approach and loaded with electrolyte for rechargeable lithium-ion battery applications. The electrospun nanofibers comprising various hBN contents were characterized using Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The influence of the hBN/PAN ratio on the properties of the porous composite system, such as fiber diameter, porosity, and liquid electrolyte uptake capability, was systematically studied. Ionic conductivities and electrochemical characteristics were evaluated after loading the electrospun hBN/PAN composite nanofibers with liquid electrolyte, i.e., 1 M lithium hexafluorophosphate (LiPF₆) in ethylene carbonate (EC)/ethyl methyl carbonate (EMC) (1:1 vol). The electrolyte-loaded nanofiber has a highest ionic conductivity of 10⁻³ S cm⁻¹ at room temperature, and cyclic voltammetry (CV) results show a high electrochemical stability window up to 4.7 V versus Li⁺/Li. A Li//10 wt% hBN/PAN//LiCoO₂ cell was produced, which delivered a high discharge capacity of 144 mAh g⁻¹ and a capacity retention of 92.4%. Considering the high safety and low cost of the resulting hBN/PAN fiber electrolytes, these materials can be suggested as potential separator materials for lithium-ion batteries.

Keywords: hexagonal boron nitride, polyacrylonitrile, electrospinning, lithium ion battery

Procedia PDF Downloads 141
2446 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells

Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser

Abstract:

Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring normal operation. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations, and standards, among others. To overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids, together with a heuristic algorithm that works on top of the proposed model to find resilient configurations. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios where adequate solutions are difficult to find.

Keywords: cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security

Procedia PDF Downloads 323
2445 Loan Supply and Asset Price Volatility: An Experimental Study

Authors: Gabriele Iannotta

Abstract:

This paper investigates credit cycles by means of an experiment based on a Kiyotaki & Moore (1997) model with heterogeneous expectations. The aim is to examine how a credit squeeze caused by high lender-level risk perceptions affects the real prices of a collateralised asset, with a special focus on the macroeconomic implications of rising price volatility in terms of total welfare and the number of bankruptcies that occur. To do that, a learning-to-forecast experiment (LtFE) has been run where participants are asked to predict the future price of land and then rewarded based on the accuracy of their forecasts. The setting includes one lender and five borrowers in each of the twelve sessions split between six control groups (G1) and six treatment groups (G2). The only difference is that while in G1 the lender always satisfies borrowers’ loan demand (bankruptcies permitting), in G2 he/she closes the entire credit market in case three or more bankruptcies occur in the previous round. Experimental results show that negative risk-driven supply shocks amplify the volatility of collateral prices. This uncertainty worsens the agents’ ability to predict the future value of land and, as a consequence, the number of defaults increases and the total welfare deteriorates.

Keywords: behavioural macroeconomics, credit cycle, experimental economics, heterogeneous expectations, learning-to-forecast experiment

Procedia PDF Downloads 121
2444 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities help them gain insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the HoloLens augmented reality headset and analyzes the captured images to classify and detect electronic components. The automated error classification and detection system identifies the position of every component on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practice without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practice performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model; the raw images will be cleaned, processed, and labeled to facilitate component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard, and the images are forwarded to the server for detection in the background. A hybrid of Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) will be trained on the dataset for object recognition and classification: the convolutional layers extract image features, which are then classified by the SVM. With adequately labeled training data, the model will predict and categorize component placements and assess whether students have positioned components correctly. The system constantly checks whether students place components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens flags the error before the component is fixed in the wrong position and prompts students to correct their mistake. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they make it possible to assess how accurately machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 132
2443 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features

Authors: Rabab M. Ramadan, Elaraby A. Elgallad

Abstract:

With widespread application, unimodal biometric systems face a variety of problems, such as signal and background noise, distortion, and environmental differences; multimodal biometric systems have therefore been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features, and the iris is a strong competitor, together with the face and fingerprints, for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, they are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm will be applied to the Indian Institute of Technology Delhi (IITD) database, and its performance will be compared with various iris recognition algorithms found in the literature.

Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform (SIFT)

Procedia PDF Downloads 231
2442 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard time window constraints. In this paper, we propose to deal with this optimization problem using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, in which unstable substances with excessive energy are transformed into stable ones: the molecules interact through a series of elementary reactions to reach the minimum energy required for their existence. This property is embedded in CRO to solve the VRPHTW. To enhance population diversity throughout the search process, we integrated GRASP into our method. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.
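The GRASP side of the hybrid can be sketched through its construction phase, which is where the randomized greediness that diversifies the population comes from. This is a hedged, simplified illustration (nearest-neighbour insertion on a single uncapacitated route, ignoring time windows), not the paper's full VRPHTW procedure:

```python
import random

def grasp_construct(depot, customers, alpha=0.3, seed=0):
    """GRASP construction phase: at each step build a restricted candidate
    list (RCL) of insertions whose cost is within alpha of the best one,
    then pick a random RCL member. alpha=0 is pure greedy, alpha=1 pure random."""
    rng = random.Random(seed)

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    route, remaining = [depot], list(customers)
    while remaining:
        costs = [(dist(route[-1], c), c) for c in remaining]
        c_min = min(c for c, _ in costs)
        c_max = max(c for c, _ in costs)
        rcl = [cand for c, cand in costs if c <= c_min + alpha * (c_max - c_min)]
        nxt = rng.choice(rcl)
        route.append(nxt)
        remaining.remove(nxt)
    return route + [depot]

customers = [(2, 1), (5, 3), (1, 4), (6, 6)]
route = grasp_construct((0, 0), customers)
```

Repeated runs with different seeds yield distinct good-quality routes, which is exactly the diversity a population-based method such as CRO can then exploit.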

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 409
2441 Effect of Weed Control and Different Plant Densities the Yield and Quality of Safflower (Carthamus tinctorius L.)

Authors: Hasan Dalgic, Fikret Akinerdem

Abstract:

This trial was conducted to determine the effect of different plant densities and weed control treatments on the yield and quality of winter-sown safflower (Carthamus tinctorius L.) in the trial fields of the Agricultural Faculty, Selcuk University; trifluralin was used as the herbicide. The field trial was carried out during the 2009-2010 vegetation period with three replications according to a 'split plots in randomized blocks' design. The weed control treatments were assigned to the main plots and row distances to the sub-plots. The treatments consisted of three weed control techniques, namely herbicide application (trifluralin), hoeing, and an untreated control, alongside row distances of 15 cm and 30 cm. The results ranged between 59.0-76.73 cm in plant height, 40.00-47.07 cm in first branch height, 5.00-7.20 branches per plant, 6.00-14.73 heads per plant, 19.57-21.87 mm in head diameter, 2125.0-3968.3 kg ha⁻¹ in seed yield, 27.10-28.08% in crude oil rate, and 531.7-1070.3 kg ha⁻¹ in crude oil yield. According to the results, the Remzibey safflower cultivar showed the highest seed yield at a 30 cm row distance with herbicide application, through the direct effects of plant height, first branch height, branches per plant, heads per plant, head diameter, crude oil rate, and crude oil yield.

Keywords: safflower, herbicide, row spacing, seed yield, oil ratio, oil yield

Procedia PDF Downloads 330
2440 Seven Years Assessment on the Suitability of Cocoa Clones Cultivation in High-Density Planting and Its Management in Malaysia

Authors: O. Rozita, N. M. Nik Aziz

Abstract:

High-density planting is usually recommended for a small planting area in order to increase production; the normal planting distance for cocoa (Theobroma cacao L.) in Malaysia is 3 m x 3 m. The study was conducted at the Cocoa Research and Development Centre, Malaysian Cocoa Board, Jengka, Pahang, with the objectives of evaluating the suitability of seven cocoa clones under four different planting densities and studying the interaction between cocoa clones and planting densities. The study was arranged in a split-plot randomized complete block design with three replications, with cocoa clone as the main plot and planting density as the subplot. The clones used in this study were PBC 123, PBC 112, MCBC 4, MCBC 5, QH 1003, QH 22, and BAL 244. The planting distances were 3 m x 3 m (1000 stands/ha), 3 m x 1.5 m (2000 stands/ha), 3 m x 1 m (3000 stands/ha), and (1.5 m x 1.5 m) x 3 m (3333 stands/ha). Yield performance was evaluated for seven years. Clones PBC 123, QH 1003, and QH 22 obtained higher yields, while MCBC 4, MCBC 5, and BAL 244 obtained the lowest yields. In general, high-density planting can increase cocoa production under good management practices. Among these practices, the selection of suitable clones with a small branching habit and moderately vigorous growth, together with proper pruning, was the most important factor in high-density planting.

Keywords: clones, management, planting density, Theobroma cacao, yield

Procedia PDF Downloads 372
2439 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might introduce issues into the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in the early stages. Alongside image processing, automatic segmentation of high-contrast objects is performed using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features from the regions of interest. The results of this study prove the potential of this tool for further diagnostics and classification of mammographic images, owing to its low sensitivity to noisy images and low-contrast mammograms.
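The circle Hough transform at the core of this pipeline can be sketched in a few lines. The following is a minimal, NumPy-only illustration (not the authors' implementation): each edge pixel votes for every candidate centre lying one radius away, and the accumulator peak recovers the circle centre.

```python
import numpy as np

def circle_hough(edges, radius):
    """Accumulate votes for circle centres of a fixed radius.

    edges  : 2-D boolean array of edge pixels
    radius : circle radius (in pixels) to search for
    Returns the accumulator array; its argmax is the best centre.
    """
    acc = np.zeros(edges.shape)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for all centres lying `radius` away from it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic test: a circle of radius 8 centred at (32, 40).
img = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
img[np.round(32 + 8 * np.sin(t)).astype(int),
    np.round(40 + 8 * np.cos(t)).astype(int)] = True
acc = circle_hough(img, radius=8)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # recovered centre
```

In a real CAD pipeline, the binary edge image would come from an edge extractor applied to the mammogram, and the search would sweep a range of radii matching plausible microcalcification sizes.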

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 77
2438 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information and to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classifier and a logistic regression model were built to extract features from the sample data, and their in-sample errors were compared. Out-of-sample error was also compared under different Vapnik-Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM model correctly predicts movie genre preference in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customer preferences with a small data set and how to design prediction tools for these enterprises.
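The two-model comparison can be sketched with scikit-learn. The snippet below is illustrative only: the paper's customer data are not available, so a synthetic feature set stands in, and the resulting accuracies are not the paper's 85%/93% figures.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 1000 synthetic "customers", 20 demographic/behavioural features,
# and a binary genre-preference label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)      # Gaussian kernel SVM
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # baseline model

# Out-of-sample accuracy is the comparison criterion used in the paper.
acc_svm = svm.score(X_te, y_te)
acc_lr = logreg.score(X_te, y_te)
print(acc_svm, acc_lr)
```

Controlling model capacity (the VC-dimension argument in the abstract) corresponds here to tuning the SVM's `C` and `gamma` hyperparameters, e.g. via cross-validation.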

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 254
2437 A Convergent Interacting Particle Method for Computing KPP Front Speeds in Random Flows

Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang

Abstract:

We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method, implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originating in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as the 2D cellular flow and the 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.
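A stripped-down version of the mutation/selection loop conveys the idea. In the hedged sketch below (not the authors' code), particles take one Euler-Maruyama advection-diffusion step in a 2D cellular flow and are then resampled according to Feynman-Kac fitness weights; using a constant reaction rate makes the exact growth rate known, so the sketch is self-checking.

```python
import numpy as np

rng = np.random.default_rng(0)

def cellular_flow(x):
    # Standard divergence-free 2D cellular flow, used as a test velocity field.
    return np.stack([-np.sin(x[:, 0]) * np.cos(x[:, 1]),
                      np.cos(x[:, 0]) * np.sin(x[:, 1])], axis=1)

def ipm_growth_rate(n_particles=500, n_steps=100, dt=0.01, kappa=0.1, lam=1.0):
    """Operator-splitting IPM: mutation (advection-diffusion step), then
    selection through Feynman-Kac weights; returns the growth-rate estimate."""
    x = rng.uniform(0.0, 2.0 * np.pi, size=(n_particles, 2))
    log_growth = 0.0
    for _ in range(n_steps):
        # Mutation: one Euler-Maruyama step of the advection-diffusion SDE.
        x = x + cellular_flow(x) * dt \
              + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(x.shape)
        # Selection: weight particles by the FK fitness. A constant reaction
        # rate lam is used here so the exact answer (lam) is known.
        w = np.exp(lam * dt * np.ones(n_particles))
        log_growth += np.log(w.mean())
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return log_growth / (n_steps * dt)

rate = ipm_growth_rate()
print(rate)  # → 1.0 up to floating-point error
```

In the actual method the reaction rate depends on position through the KPP linearization, so the selection step is non-trivial and the estimate converges to the principal eigenvalue rather than a known constant.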

Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis

Procedia PDF Downloads 45
2436 The Design of a Mixed Matrix Model for Activity Levels Extraction and Sub Processes Classification of a Work Project (Case: Great Tehran Electrical Distribution Company)

Authors: Elham Allahmoradi, Bahman Allahmoradi, Ali Bonyadi Naeini

Abstract:

Complex systems have many aspects, and a variety of methods have been developed to analyze them. The most efficient of these methods should not only be simple but also provide useful and comprehensive information about many aspects of the system. Matrix methods are among the most commonly used methods for analyzing and designing systems, and each matrix method examines a particular aspect of the system. If these methods are combined, managers can access more comprehensive and broader information about the system. This study was conducted in four steps. In the first step, a process model of a real project was extracted through IDEF3. In the second step, activity levels were attained by writing the process model in the form of a design structure matrix (DSM) and sorting it through the triangulation algorithm (TA). In the third step, sub-processes were obtained by writing the process model in the form of an interface structure matrix (ISM) and clustering it through the cluster identification algorithm (CIA). In the fourth step, a mixed model was developed to provide a unified picture of the project structure through the simultaneous presentation of activities and sub-processes.
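The level-extraction step (DSM sorting) can be illustrated with a small example. The sketch below is a generic level-partitioning pass over an acyclic binary DSM; it is not the authors' TA implementation, which must also handle coupled blocks of mutually dependent activities.

```python
def dsm_levels(dsm):
    """Partition a binary DSM into activity levels.

    dsm[i][j] = 1 means activity i depends on (receives input from)
    activity j. Activities whose remaining dependencies have all been
    scheduled form the next level. Assumes the DSM is acyclic.
    """
    n = len(dsm)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if all(j not in remaining or dsm[i][j] == 0 for j in range(n))]
        if not level:
            raise ValueError("coupled block (cycle) detected")
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Activity 0 depends on nothing; 1 and 2 depend on 0; 3 depends on 1 and 2.
dsm = [[0, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [0, 1, 1, 0]]
print(dsm_levels(dsm))  # → [[0], [1, 2], [3]]
```

Activities in the same level have no dependencies on each other and can, in principle, proceed in parallel.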

Keywords: integrated definition for process description capture (IDEF3) method, design structure matrix (DSM), interface structure matrix (ISM), mixed matrix model, activity level, sub-process

Procedia PDF Downloads 490
2435 Errors and Misconceptions for Students with Mathematical Learning Disabilities: Quest for Suitable Teaching Strategy

Authors: A. K. Tsafe

Abstract:

The study investigates the efficacy of a Special Mathematics Teaching Strategy (SMTS) as against a Conventional Mathematics Teaching Strategy (CMTS) in teaching students identified with Mathematics Learning Disabilities (MLDs) – dyslexia, Down syndrome, dyscalculia, etc. – in some junior secondary schools around the Sokoto metropolis. Errors and misconceptions in learning mathematics displayed by these categories of students were observed. The theory of variation was used as a prism for viewing the MLDs from a theoretical perspective. An experimental research design was used, involving a non-randomized pretest-posttest approach. The pretest was administered to the intact class taught using CMTS before the class was split into experimental and control groups. The experimental group – students identified with MLDs – was taught with SMTS, and the mean performances of students taught using the two strategies were then compared to determine whether there was any significant difference. A null hypothesis was tested at the α = 0.05 level of significance, using a t-test to establish the difference between the mean performances on the two tests. The null hypothesis was rejected: the performance of students identified with MLDs taught using SMTS was found to be better than their earlier performance when taught using CMTS. The study therefore recommends, among other things, that teachers be encouraged to use SMTS in teaching mathematics, especially when students are found to suffer from MLDs and exhibit errors and misconceptions in the process of learning mathematics.
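A hypothesis test of this kind can be reproduced in a few lines. The sketch below uses synthetic scores (the study's raw data are not reported here), SciPy's independent-samples t-test, and the stated α = 0.05; the score distributions are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical score distributions standing in for the study's test data.
cmts_scores = rng.normal(45.0, 10.0, 30)  # pretest, taught with CMTS
smts_scores = rng.normal(60.0, 10.0, 30)  # posttest, taught with SMTS

# Independent-samples t-test at alpha = 0.05. (A paired test, ttest_rel,
# would apply if the same students produced both sets of scores.)
t_stat, p_value = stats.ttest_ind(smts_scores, cmts_scores)
reject_null = p_value < 0.05
print(t_stat, p_value, reject_null)
```

A rejected null here means the mean SMTS score is significantly different from the mean CMTS score, which is the inference the abstract reports.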

Keywords: disabilities, errors, learning, misconceptions

Procedia PDF Downloads 93
2434 Application of Shape Memory Alloy as Shear Connector in Composite Bridges: Overview of State-of-the-Art

Authors: Apurwa Rastogi, Anant Parghi

Abstract:

Shape memory alloys (SMAs) are 'memory metals' with strong potential as a civil construction material. They showcase the novel functionality of undergoing large deformations and recovering them (pseudoelasticity), which has led to emerging applications in a variety of areas. In the existing literature, most studies have focused on the behaviour of SMA used in critical regions of smart buildings and bridges designed to withstand severe earthquakes without collapse, and on its various applications in retrofitting works. However, despite their high ductility, their use as construction joints and shear connectors in composite bridges remains unexplored in the research domain. This article aims to provide a broad outlook on whether SMAs can be partially used as shear connectors in composite bridges. To this end, existing papers on the characteristics of shear connectors in composite bridges are discussed thoroughly and matched against the fundamental characteristics and properties of SMA. Because of the high strength, stiffness, and ductility of SMAs, they are expected to be a good material for shear connectors in composite bridges, and the collected evidence encourages prior scrutiny of their partial use in composite construction. Based on the comprehensive review, the necessary conclusions are affirmed, and the emerging research directions on the use of SMA are discussed. This opens a window of new possibilities for using smart materials to further enhance the performance of bridges in the near future.

Keywords: composite bridges, ductility, pseudoelasticity, shape memory alloy, shear connectors

Procedia PDF Downloads 186
2433 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams

Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina

Abstract:

It is estimated that one death every 10 seconds worldwide (about 2 million deaths each year) is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising the quality of life of patients. Evaluations of these sequelae are usually performed subjectively by radiology specialists, and subjective evaluation is prone to inter- and intra-observer variation. The chest x-ray is the imaging method most commonly used for monitoring patients diagnosed with TB, and the least costly to the institution. The application of computational algorithms is therefore of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment pre- and post-treatment in patients with pulmonary TB. X-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. First, the total lung area was segmented (posteroanterior and lateral views); the region compromised by the pulmonary sequela was then targeted. Through morphological operators and a noise-suppression tool, it was possible to determine the compromised lung volume. The largest difference found pre- and post-treatment was 85.85% and the smallest was 54.08%.
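As a toy illustration of this kind of quantification (not the authors' algorithm, whose segmentation and noise-suppression steps are more involved), the sketch below thresholds a synthetic "lung" image, cleans the result with a morphological opening, and reports the compromised fraction.

```python
import numpy as np
from scipy import ndimage

def compromised_fraction(lung, threshold):
    """Fraction of segmented lung pixels flagged as compromised.

    lung      : 2-D float array of pixel intensities inside the lung mask
                (NaN outside the lung)
    threshold : intensity above which tissue is counted as compromised
    """
    mask = ~np.isnan(lung)
    dense = (lung > threshold) & mask
    # Morphological opening suppresses isolated noisy pixels, in the spirit
    # of the paper's morphological-operator and noise-suppression steps.
    dense = ndimage.binary_opening(dense, structure=np.ones((3, 3)))
    return dense.sum() / mask.sum()

# Synthetic "lung": background intensity 0.2 with a 20x20 dense patch of 0.9.
lung = np.full((100, 100), 0.2)
lung[40:60, 40:60] = 0.9
frac = compromised_fraction(lung, threshold=0.5)
print(frac)  # → 0.04 (400 of 10000 pixels)
```

Comparing this fraction between pre- and post-treatment images gives a per-patient impairment change of the kind the abstract reports.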

Keywords: algorithm, radiology, tuberculosis, x-ray exams

Procedia PDF Downloads 416
2432 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

This paper develops a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay, and the percentage of rejected items provided by each supplier are assumed to be stochastic parameters following arbitrary probability distributions. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, total items delivered with delay, and total rejected items are less than or equal to pre-determined values given by the decision maker. The stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by a genetic algorithm, which runs a simulation process to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis based on the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the worse the value of the objective function in the stochastic single-objective programming problem.
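The simulation-in-the-loop idea can be conveyed with a heavily simplified sketch. Everything below is a hypothetical illustration: three suppliers with made-up cost distributions, a single chance constraint (budget), and a minimal (1+1) evolutionary loop standing in for the paper's full genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_suppliers, demand, budget = 3, 100.0, 1300.0
cost_mean = np.array([10.0, 14.0, 18.0])   # stochastic unit costs (assumed)
cost_sd = np.array([1.0, 3.0, 6.0])

def fitness(order, n_sim=500):
    """Dependent-chance objective: estimated probability that the realised
    total purchasing cost does not exceed the budget (Monte Carlo)."""
    unit = rng.normal(cost_mean, cost_sd, size=(n_sim, n_suppliers))
    return np.mean(unit @ order <= budget)

def normalise(x):
    # Project onto the feasible set: non-negative orders meeting total demand.
    x = np.maximum(x, 1e-9)
    return demand * x / x.sum()

# Minimal (1+1) evolutionary loop with a simulation-based fitness.
best = normalise(rng.random(n_suppliers))
for _ in range(100):
    child = normalise(best + rng.normal(0.0, 5.0, n_suppliers))
    if fitness(child) >= fitness(best):
        best = child
print(best, fitness(best))
```

A full GA would maintain a population with crossover and mutation, and average more simulation runs per fitness evaluation to reduce the noise in these comparisons.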

Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm

Procedia PDF Downloads 309
2431 Baseline Study for Performance Evaluation of New Generation Solar Insulation Films for Windows: A Test Bed in Singapore

Authors: Priya Pawar, Rithika Susan Thomas, Emmanuel Blonkowski

Abstract:

Due to the solar geometry of Singapore, which lies within the equatorial tropics, a great deal of thermal energy is transferred to the inside of buildings. With the changing face of economic development in cities like Singapore, more and more buildings are designed to be lightweight, using transparent construction materials such as glass. Increased demands for energy efficiency and reduced cooling loads make it important for building designers and operators to adopt new, non-invasive technologies to achieve building energy efficiency targets. A real-time performance evaluation study was undertaken at the School of Art, Design and Media (SADM), Singapore, to determine the efficiency potential of a new-generation solar insulation film. The building has a window-to-wall ratio (WWR) of 100% and is fitted with high-performance (low-emissivity) double-glazed units. The empirical data collected were then used to calibrate a computerized simulation model to estimate annual energy consumption under existing conditions (baseline performance). It was found that quantifying the correlations of parameters such as solar irradiance, solar heat flux, and outdoor air temperature is significantly important for determining the cooling load during a particular testing period.
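As context for the baseline-calibration step, the basic relation between glazing properties and instantaneous solar heat gain is Q = SHGC × A × I. The numbers below are illustrative assumptions, not measurements from the SADM test bed, and film performance would in practice be measured rather than assumed.

```python
# Back-of-envelope sketch: instantaneous solar heat gain through glazing,
# before and after applying a hypothetical solar insulation film.
def solar_heat_gain(shgc, area_m2, irradiance_w_m2):
    """Q = SHGC * A * I, the standard glazing heat-gain relation (watts)."""
    return shgc * area_m2 * irradiance_w_m2

area = 200.0          # glazed facade area, m^2 (assumed)
irradiance = 600.0    # solar irradiance on the facade, W/m^2 (assumed)
q_before = solar_heat_gain(0.40, area, irradiance)  # low-e double glazing
q_after = solar_heat_gain(0.25, area, irradiance)   # with insulation film
print(q_before - q_after)  # reduction in instantaneous cooling load, W
```

Integrating this gain over measured irradiance profiles, as the calibrated simulation model does, yields the annual cooling-energy impact.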

Keywords: solar insulation film, building energy efficiency, tropics, cooling load

Procedia PDF Downloads 189
2430 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
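A minimal sketch of the AdaGrad + L-SVRG combination on a least-squares problem conveys the construction. This is a toy re-implementation under our own reading of the abstract, not the authors' AdaLVR code; the step size and reference-update probability are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)          # consistent least-squares problem

grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]   # per-sample gradient
full_grad = lambda x: A.T @ (A @ x - b) / n      # full finite-sum gradient

def adalvr_sketch(steps=3000, eta=1.0, p=0.1):
    """AdaGrad step sizes driven by an L-SVRG (loopless SVRG) estimator."""
    x = np.zeros(d)
    w, mu = x.copy(), full_grad(x)      # reference point and its full gradient
    acc = 1e-8 * np.ones(d)             # AdaGrad per-coordinate accumulator
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(w, i) + mu   # variance-reduced estimator
        acc += g * g
        x -= eta * g / np.sqrt(acc)            # coordinate-wise AdaGrad step
        if rng.random() < p:                   # loopless reference update
            w, mu = x.copy(), full_grad(x)
    return x

x_hat = adalvr_sketch()
print(np.linalg.norm(full_grad(x_hat)))  # gradient norm at the returned point
```

The "loopless" aspect is the probabilistic reference update: instead of SVRG's outer loop with a fixed epoch length, the full gradient at the reference point is refreshed with probability p at each step.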

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 66
2429 Immunomodulation by Interleukin-10 Therapy in Mouse Airway Transplantation

Authors: Mohammaad Afzal Khan, Ghazi Abdulmalik Ashoor, Fatimah Alanazi, Talal Shamma, Abdullah Altuhami, Hala Abdalrahman Ahmed, Abdullah Mohammed Assiri, Dieter Clemens Broering

Abstract:

Microvascular injuries during inflammation are key causes of transplant malfunction and permanent failure, and they play a major role in the development of chronic rejection of the transplanted organ. Inflammation-induced microvascular loss is a promising area in which to investigate the decisive roles of regulatory and effector responses. The present study was designed to investigate the impact of IL-10 on immunotolerance, in particular in the microenvironment of the allograft during rejection. Here, we investigated the effects of IL-10 blockade/reconstitution and serially monitored regulatory T cells (Tregs), graft microvasculature, and airway epithelium in rejecting airway transplants. We demonstrated that the blockade/reconstitution of IL-10 significantly modulates CD4+FOXP3+ Tregs, microvasculature, and airway epithelium during rejection. Our findings further highlighted that blockade of IL-10 upregulated the proinflammatory cytokines IL-2, IL-1β, IFN-γ, IL-15, and IL-23 but suppressed IL-5 secretion during rejection, whereas reconstitution of IL-10 significantly upregulated CD4+FOXP3+ Tregs, tissue oxygenation/blood flow, and airway repair. Collectively, these findings demonstrate a potentially reparative modulation by IL-10 during microvascular and epithelial repair, which could provide a vital therapeutic window for rejecting transplants in clinical practice.

Keywords: interleukin-10, regulatory T cells, allograft rejection, immunotolerance

Procedia PDF Downloads 171
2428 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been used extensively. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to exploit the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, its decay time, and the associated phase angle, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
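The fuzzy half of such a decision can be sketched with triangular membership functions over the first-cycle measurements. Everything below (thresholds, membership shapes, the two-rule logic) is a hypothetical illustration, not the plant's actual criteria, and the neural-network signal validation stage is omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_transfer(residual_v_pu, phase_angle_deg):
    # Degree to which conditions favour an immediate (fast) transfer:
    # residual voltage near nominal AND phase angle near zero.
    v_near_nominal = tri(residual_v_pu, 0.5, 1.0, 1.5)
    angle_small = tri(phase_angle_deg, -30.0, 0.0, 30.0)
    fast = min(v_near_nominal, angle_small)  # fuzzy AND as minimum
    return "sequential fast transfer" if fast >= 0.5 else "residual bus transfer"

choice_hot = choose_transfer(0.95, 5.0)       # near-nominal bus voltage
choice_decayed = choose_transfer(0.30, 80.0)  # voltage has decayed, large angle
print(choice_hot, choice_decayed)
```

In the proposed scheme, the neural network would first validate these voltage measurements before a fuzzy stage of this kind selects the transfer algorithm.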

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 166
2427 Effect of High-Volume Processed Fly Ash on Engineering Properties of Concrete

Authors: Dhara Shah, Chandrakant Shah

Abstract:

Fly ash is a residual material of energy production from coal. It offers numerous advantages for use in the concrete industry, such as improved workability, increased ultimate strength, reduced bleeding, reduced permeability, better finish, and reduced heat of hydration. The type of fly ash depends on the type of coal and the coal combustion process. Fly ash is a pozzolanic material with two main classes, F and C, based on chemical composition. The fly ash used for this experimental work contains a significant amount of lime and would be categorized as type F fly ash. Generally, all types of fly ash have particle sizes of less than 0.075 mm. The fineness and lime content of fly ash are very important, as they affect the air content and water demand of the concrete, thereby affecting its durability and strength. The present work was carried out to optimize the use of fly ash to produce concrete with improved results and added benefits. A series of tests was carried out, analyzed, and compared with concrete manufactured using only Portland cement as a binder. The study covers concrete mixes with replacement of cement by different proportions of fly ash: two mixes, M25 and M30, were studied with six replacement levels (40%, 45%, 50%, 55%, 60%, and 65%) at 7, 14, 28, 56, and 90 days. The study focused on the compressive strength, split tensile strength, modulus of elasticity, and modulus of rupture of the concrete. It clearly revealed that cement replacement by any proportion of fly ash failed to achieve early strength, while replacements of 40% and 45% achieved the required flexural strength for the M25 and M30 grades of concrete.

Keywords: processed fly ash, engineering properties of concrete, pozzolanic, lime content

Procedia PDF Downloads 331