Search results for: one side class algorithm
5361 Optimisation of Intermodal Transport Chain of Supermarkets on Isle of Wight, UK
Authors: Jingya Liu, Yue Wu, Jiabin Luo
Abstract:
This work investigates an intermodal transportation system for delivering goods from a Regional Distribution Centre to supermarkets on the Isle of Wight (IOW) via the port of Southampton or Portsmouth in the UK. We consider this integrated logistics chain as a 3-echelon transportation system. In such a system, two types of transport method are used to deliver goods across the Solent Channel: accompanied transport, which is used by most supermarkets on the IOW, such as Spar, Lidl and Co-operative Food; and unaccompanied transport, which is used by Aldi. Five transport scenarios are studied based on different transport modes and ferry routes. The aim is to determine an optimal delivery plan for supermarkets of different business scales on the IOW, in order to minimise the total running cost, fuel consumption and carbon emissions. The problem is modelled as a vehicle routing problem with time windows and solved by a genetic algorithm. The computational results suggest that accompanied transport is more cost efficient for small and medium business-scale supermarket chains on the IOW, while unaccompanied transport has the potential to improve the efficiency and effectiveness of large business-scale supermarket chains.
Keywords: genetic algorithm, intermodal transport system, Isle of Wight, optimization, supermarket
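The abstract names a genetic algorithm for a vehicle routing problem; a minimal sketch of the GA machinery (permutation encoding, elitist selection, order-preserving crossover, swap mutation) on a plain single-vehicle routing objective might look as follows. The distance matrix, the parameters and the omission of time windows are illustrative assumptions, not the authors' model.

```python
import random

def route_cost(route, dist):
    """Cost of visiting stops in order, starting and ending at depot 0."""
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def genetic_route(dist, stops, pop_size=30, generations=200, seed=1):
    """Evolve a visiting order with an elitist GA (illustrative settings)."""
    rng = random.Random(seed)
    pop = [rng.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_cost(r, dist))
        survivors = pop[:pop_size // 2]                # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(stops))         # one-point order crossover
            head = p1[:cut]
            child = head + [s for s in p2 if s not in head]
            if rng.random() < 0.2:                     # swap mutation
                i, j = rng.sample(range(len(stops)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_cost(r, dist))
```

The crossover keeps a prefix of one parent and fills in the remaining stops in the other parent's order, so every child remains a valid permutation of the stops.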
Procedia PDF Downloads 374
5360 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics
Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones
Abstract:
Motor vehicle related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, which is a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike ‘typical’ four-wheel motor vehicles, being characterised by three wheels, a non-tilting sheet-metal body or open frame construction, a canvas roof and side curtains, a small driver’s cabin, handlebar controls and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle-impact-related pedestrian crash kinematic analyses and/or injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a Finite Element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre or offset) and relative pedestrian impact position (front, side and rear).
The output data of each impact simulation were correlated against reported injury metrics, Head Injury Criterion (front, side and rear), Neck Injury Criterion (front, side and rear), Abbreviated Injury Scale and reported risk level, adding greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, which during some of the impact simulations may even risk fatalities. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to “real world” accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions which are subject to the greatest risk of producing pedestrian injury.
Keywords: auto rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact
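The Head Injury Criterion used above has a standard closed form: the worst value of (t2 − t1) · (mean acceleration over [t1, t2])^2.5 across all time windows up to a fixed width (commonly 15 ms or 36 ms). A hedged sketch of the discrete computation, assuming strictly increasing sample times in seconds and acceleration in g, could be:

```python
def hic(times, accel, max_window=0.015):
    """Discrete Head Injury Criterion over windows up to max_window seconds.

    `times` must be strictly increasing (s); `accel` is resultant head
    acceleration (g). Uses a trapezoidal integral for the windowed mean.
    """
    best = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            dt = times[j] - times[i]
            if dt > max_window:
                break
            # trapezoidal integral of a(t) over [times[i], times[j]]
            area = sum((accel[k] + accel[k + 1]) / 2 * (times[k + 1] - times[k])
                       for k in range(i, j))
            best = max(best, dt * (area / dt) ** 2.5)
    return best
```

For a constant 50 g pulse, the criterion grows with the window width, so the maximum is attained over the full allowed window.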
Procedia PDF Downloads 195
5359 2D Hexagonal Cellular Automata: The Complexity of Forms
Authors: Vural Erdogan
Abstract:
We created two-dimensional hexagonal cellular automata to obtain complexity using simple rules, in the same spirit as Conway’s Game of Life. Considering the Game of Life rules, Wolfram's work on life-like structures and John von Neumann's self-replication, self-maintenance and self-reproduction problems, we developed 2-state and 3-state hexagonal growing algorithms that reach large populations from random initial states. Unlike the Game of Life, we used six-neighbourhood cellular automata instead of eight- or four-neighbourhood ones. Initial simulations examined whether we were able to obtain oscillators, blinkers and gliders. Inspired by the complexity of Wolfram's 1D cellular automata and by life-like structures, we simulated 2D synchronous, discrete, deterministic cellular automata to reach life-like forms with 2-state cells. We explain how the life-like formations and the oscillators contribute to initiating self-maintenance together with self-reproduction and self-replication. After comparing simulation results, we developed the algorithm a step further: appending a new state to the same algorithm that we used for reaching life-like structures led us to experiment with new branching and fractal forms. All these studies attempt to demonstrate that complex life forms might come from uncomplicated rules.
Keywords: hexagonal cellular automata, self-replication, self-reproduction, self-maintenance
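As a toy illustration of the six-neighbourhood synchronous update described here, a 2-state hexagonal CA can be written over axial (q, r) coordinates, where every cell has exactly six neighbours. The birth/survival thresholds below are hypothetical placeholders, not the authors' rules.

```python
from collections import Counter

# Six neighbour offsets in axial (q, r) coordinates of a hexagonal grid
HEX_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(alive, birth=frozenset({2}), survive=frozenset({3, 4})):
    """One synchronous update of a 2-state hexagonal CA.

    `alive` is a set of live (q, r) cells; the rule sets are illustrative.
    """
    counts = Counter((q + dq, r + dr)
                     for q, r in alive
                     for dq, dr in HEX_NEIGHBOURS)
    return {cell for cell, n in counts.items()
            if (cell in alive and n in survive)
            or (cell not in alive and n in birth)}
```

Keeping the grid as a sparse set of live cells means random initial states can grow without a fixed-size array, which suits the "growing algorithm" framing of the abstract.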
Procedia PDF Downloads 159
5358 A Critical Review on Temperature Affecting the Morpho-Physiological, Hormonal and Genetic Control of Branching in Chrysanthemum
Authors: S. Ahmad, C. Yuan, Q. Zhang
Abstract:
The assorted architectural plasticity of a plant is largely specified by stooling, a phenomenon governed by a combination of developmental, environmental and hormonal accelerators of lateral buds. Chrysanthemums (Chrysanthemum morifolium) are among the most economically important ornamental plants worldwide on account of their plentiful architectural patterns, diverse shapes and attractive colors. Side branching is the major determinant guaranteeing the consistent demand for cut chrysanthemum in the flower industry. The presence of an immense number of axillary branches devalues the economic importance of this imperative plant and is a major challenge for mum growers seeking to hold a stake in the cut flower market. Restricting branches to a minimum level, or eliminating them altogether, is urgently needed in order to introduce novelty in cut chrysanthemums. Temperature is a potent factor which largely affects the growth and development of chrysanthemum, as well as the genetic expression of various vegetative traits such as branching. It differentially affects the developmental characteristics and phenotypic expression of inherent qualities, thereby playing a significant role in differentiating the developmental responses of different cultivars of chrysanthemum. A detailed study of the effect of temperature on branching in chrysanthemum is clearly lacking throughout the literature on mums. Therefore, exploring temperature as an effective means of reducing side branching to a desired level could be a valuable extension of efforts to suppress stooling. This requires substantial research in order to reveal the extended role of temperature in manipulating the genetic control of important traits such as branching, which is a pressing issue nowadays in cut flower production in chrysanthemum.
The present review highlights the impact of temperature on the branching control mechanism in chrysanthemum at the morpho-physiological, hormonal and molecular levels.
Keywords: branching, chrysanthemum, genetic control, hormonal, morpho-physiological, temperature
Procedia PDF Downloads 285
5357 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison
Authors: Saugata Bose, Ritambhra Korpal
Abstract:
The internet has increased copy-paste scenarios amongst students as well as amongst researchers, leading to different levels of plagiarized documents. For this reason, much research is focused on detecting plagiarism automatically. In this paper, an initiative is discussed in which Natural Language Processing (NLP) techniques as well as supervised machine learning algorithms have been combined to detect plagiarized texts. Here, the major emphasis is on constructing a framework which successfully detects external plagiarism from monolingual texts. To detect the plagiarism, an n-gram frequency comparison approach has been implemented to construct the model framework. The framework is based on 120 characteristics extracted during pre-processing of the documents using the NLP approach. Afterwards, filter metrics have been applied to select the most relevant characteristics, and then a supervised classification learning algorithm has been used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. Our plagiarism framework achieved a very high accuracy score.
Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram
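As a minimal sketch of the n-gram comparison idea (not the full 120-characteristic framework), overlapping word n-grams of a suspect and a source document can be scored with a set-overlap measure such as Jaccard similarity; a high score flags candidate copy-paste passages for closer inspection.

```python
def word_ngrams(text, n=3):
    """Set of overlapping word n-grams of a (lowercased) text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_overlap(suspect, source, n=3):
    """Jaccard similarity of word n-gram sets: |A ∩ B| / |A ∪ B|.

    A crude stand-in for the frequency-comparison stage in the abstract.
    """
    a, b = word_ngrams(suspect, n), word_ngrams(source, n)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

In a real pipeline the texts would first go through the shallow NLP pre-processing the paper mentions (tokenisation, normalisation) before the n-gram stage.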
Procedia PDF Downloads 360
5356 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data
Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao
Abstract:
Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, there is a scale effect in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area. Based on the combination of the PROSPECT model and the SAIL model, a multi-dimensional Look-Up Table (LUT) is generated for multiple-crop LAI estimation from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter- and intra-class differences is constructed for scale effect analysis of LAI inversion over an inhomogeneous surface. The results indicate that: (1) the LUT method based on classification and parameter sensitivity analysis is useful for LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with a correlation coefficient R² of 0.82 and root mean square error (RMSE) of 0.43 m²/m². (2) The scale effect of LAI becomes more obvious as image resolution decreases, with a maximum scale bias of more than 45%. (3) The inter-class scale effect is higher than the intra-class one, and it can be corrected efficiently by the scale transfer model established based on Taylor expansion and computational geometry. After correction, the maximum scale bias is reduced to 1.2%.
Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing
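The LUT inversion step can be sketched as a nearest-spectrum search: simulate reflectances for candidate LAI values with the radiative transfer models offline, then pick the table entry minimising the spectral RMSE against the observation. The two-band table below is a made-up stand-in for the PROSPECT+SAIL simulations.

```python
import math

def invert_lai(observed, lut):
    """Return the LAI whose simulated spectrum is closest (RMSE) to `observed`.

    `lut` maps LAI -> list of band reflectances; values here are illustrative,
    not PROSAIL output.
    """
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return min(lut, key=lambda lai: rmse(observed, lut[lai]))
```

In practice the table is multi-dimensional (LAI plus leaf and canopy parameters) and the search returns the best-matching parameter vector, but the cost-minimising lookup is the same.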
Procedia PDF Downloads 442
5355 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
In recent years, the telecommunications sector has seen ongoing change and development in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes substantial business losses. Many companies conduct various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts needed to predict customer churn. The framework is an optional cost-based pre-processing stage that removes redundant features for churn management. In addition, this cost-based feature selection algorithm is applied at a telecommunications company in Turkey, and the results obtained with this algorithm are reported.
Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
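The decision-theoretic rough set machinery is beyond a snippet, but the cost-sensitive flavour of the selection can be illustrated with a simple greedy filter that trades a feature's relevance against its acquisition cost under a budget. All feature names, scores, costs and the budget below are hypothetical, and this greedy ratio rule is a generic stand-in, not the paper's algorithm.

```python
def cost_sensitive_select(relevance, cost, budget):
    """Greedy cost-aware filter.

    Rank features by relevance per unit cost and keep those that fit the
    acquisition budget. `relevance` and `cost` map feature -> score.
    """
    ranked = sorted(relevance, key=lambda f: relevance[f] / cost[f],
                    reverse=True)
    chosen, spent = [], 0.0
    for f in ranked:
        if spent + cost[f] <= budget:
            chosen.append(f)
            spent += cost[f]
    return chosen
```

The point of any cost-sensitive reduct search is the same trade-off: a slightly less predictive feature can be preferable when it is much cheaper to collect.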
Procedia PDF Downloads 449
5354 Portfolio Assessment and English as a Foreign Language Aboriginal Students’ English Learning Outcome in Taiwan
Authors: Li-Ching Hung
Abstract:
The lack of empirical research on portfolio assessment in aboriginal EFL classes at junior high schools in Taiwan may inhibit EFL teachers from appreciating the utility of this alternative assessment approach. This study addressed the following research questions: 1) how do aboriginal EFL students and instructors at junior high schools in Taiwan perceive portfolio assessment, and 2) how does portfolio assessment affect Taiwanese aboriginal EFL students’ learning outcomes? Ten classes from five junior high schools in different regions of Taiwan participated in this study. Two classes from each school joined the study; one was randomly assigned as the control group and the other as the experimental group. Each of these five junior high schools consisted of at least 50% aboriginal students. A mixed research design was utilized. The instructor of each class implemented portfolio assessment for 15 weeks of the 2015 Fall Semester. At the beginning of the semester, all participants took a GEPT test (pretest), and in the 15th week, all participants took a GEPT test of the same level (post-test). Scores on the students’ GEPT tests were checked by the researcher as supplemental data in order to understand each student’s performance. In addition, each instructor was interviewed to provide qualitative data concerning students’ general learning performance and their perception of implementing portfolio assessment in their English classes. The results of this study were used to provide suggestions for EFL instructors modifying their lesson plans regarding assessment. In addition, the empirical data serve as a reference for EFL instructors implementing portfolio assessment effectively in their classes.
Keywords: assessment, portfolio assessment, qualitative design, aboriginal ESL students
Procedia PDF Downloads 145
5353 Teacher Education: Teacher Development and Support
Authors: Khadem Hichem
Abstract:
With the technological challenges and dynamics of the contemporary world, most teachers are struggling to maintain an effective and successful teaching/learning environment for learners. As a key to the success of reforms in the educational setting, teachers must improve their competencies in order to teach effectively. Many researchers emphasise the ongoing professional development of the teacher: enhancing their experience, encouraging their responsibility for learning, and thus promoting self-reliance, collaboration and reflection. In short, teachers are considered learners, and they need to learn together. The educational system must support, both conceptually and financially, teachers’ development as lifelong learners. Teachers need opportunities to grow in language proficiency and in knowledge. Given the changing nature of language and culture in the world, all teachers must have opportunities to update their knowledge and practices. Many researchers in the field of foreign or additional languages indicate that teachers must keep abreast of effective instructional practices, and that they need special support with the challenging task of developing and administering proficiency tests to their students. For significant change to occur, each individual teacher’s needs must be addressed. The teacher must be involved experientially in the process of development since, by itself, knowledge of how to change does not mean change will be initiated. For improvement to occur, new skills have to be guided, practiced and reflected upon in collaboration with colleagues. Clearly, teachers are at different places developmentally; therefore, allowances for various entry levels and individual differences need to be built into the professional development structure. Objectives must be meaningful to the participant, and teacher improvement must be stated in terms of student knowledge, student performance, and motivation.
The most successful professional development process acknowledges the student-centered nature of good teaching. This paper highlights the importance of the teacher professional development process and institutional support as ways to enhance a good teaching and learning environment.
Keywords: teacher professional development, teacher competencies, institutional support, teacher education
Procedia PDF Downloads 358
5352 Application of Random Forest Model in The Prediction of River Water Quality
Authors: Turuganti Venkateswarlu, Jagadeesh Anmala
Abstract:
Excessive runoff from various non-point source land uses and other point sources is rapidly contaminating the water quality of streams in the Upper Green River watershed, Kentucky, USA. It is essential to maintain the stream water quality, as the river basin is one of the major freshwater sources in this region. It is also important to understand the water quality parameters (WQPs) quantitatively and qualitatively, along with their important features, as stream water is sensitive to climatic events and land-use practices. In this paper, a model was developed for predicting one of the significant WQPs, Fecal Coliform (FC), from precipitation, temperature, urban land use factor (ULUF), agricultural land use factor (ALUF), and forest land use factor (FLUF) using the Random Forest (RF) algorithm. The RF model, an ensemble learning algorithm, can also uncover feature importance characteristics of the given model inputs for different combinations. The model’s outcomes showed a good correlation between FC and the climate and land use factors (R² = 0.94), with precipitation and temperature as the primary influencing factors for FC.
Keywords: water quality, land use factors, random forest, fecal coliform
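Random Forest averages many randomized decision trees grown on bootstrap resamples. As a self-contained stand-in (no external ML library), the bootstrap-aggregation idea can be sketched with depth-one regression trees; this illustrates bagging only, not the study's actual RF configuration or its feature-importance computation.

```python
import random

def fit_stump(xs, ys):
    """Best single-feature threshold split minimising squared error."""
    best = None
    for j in range(len(xs[0])):
        for t in sorted({x[j] for x in xs}):
            left = [y for x, y in zip(xs, ys) if x[j] <= t]
            right = [y for x, y in zip(xs, ys) if x[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((y - ml) ** 2 for y in left)
                   + sum((y - mr) ** 2 for y in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    if best is None:                      # no valid split: predict the mean
        m = sum(ys) / len(ys)
        return lambda x: m
    _, j, t, ml, mr = best
    return lambda x: ml if x[j] <= t else mr

def bagged_forest(xs, ys, n_trees=25, seed=0):
    """Average many stumps, each fit on a bootstrap resample (bagging)."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]    # sample with replacement
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(tree(x) for tree in trees) / len(trees)
```

A full RF additionally grows deep trees and samples a random feature subset at each split, which is what makes the per-feature importance scores the abstract mentions meaningful.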
Procedia PDF Downloads 201
5351 The Efficacy of Methylphenidate vs Atomoxetine in Treating Attention Deficit/Hyperactivity Disorder in Child and Adolescent
Authors: Gadia Duhita, Noorhana, Tjhin Wiguna
Abstract:
Background: ADHD is the most common behavioural disorder in Indonesia. A stimulant, specifically methylphenidate, has been the first drug of choice for ADHD treatment for more than half a century. During the last decade, non-stimulant therapy (atomoxetine) for ADHD treatment has been developing. Growing evidence of its efficacy, and the difference between its side-effect profile and that of stimulant therapy, mean that methylphenidate’s position as a first-line therapy for ADHD needs re-evaluation. Both methylphenidate and atomoxetine have proven themselves against placebo in reducing core symptoms of ADHD, and more recent studies directly compare the efficacy of methylphenidate and atomoxetine. Objective: The objective of this paper is to find out whether either methylphenidate or atomoxetine is superior to the other. This paper assesses the validity, importance, and applicability of the currently available evidence comparing the effectiveness, efficacy, and safety of methylphenidate to atomoxetine for treatment of children and adolescents with ADHD. Method: The articles were searched for through the PubMed and Cochrane databases with “attention deficit/hyperactivity disorder OR adhd”, “methylphenidate”, and “atomoxetine” as the search keywords. Two relevant and eligible articles were chosen, using inclusion and exclusion criteria, to be critically appraised. Result: The study by Hazel et al. showed that the efficacy of methylphenidate and atomoxetine is comparable for the treatment of child and adolescent ADHD: 53.6% (95% CI 48.5%–58.4%) of patients responded to treatment with atomoxetine and 54.4% (95% CI 47.6%–61.1%) responded to methylphenidate, a difference in proportion of −0.9% (95% CI −9.2%–7.5%). The other study, by Hanwella et al., also showed that the efficacy of atomoxetine was not inferior to methylphenidate (SMD = 0.09, 95% CI −0.08–0.26) (Z = 1.06, p = 0.29).
However, the sub-group analysis showed that OROS methylphenidate is more effective than atomoxetine (SMD = 0.32, 95% CI 0.12–0.53) (Z = 3.05, p < 0.02). Conclusion: The efficacy of methylphenidate and atomoxetine in reducing symptoms of ADHD is comparable; neither is proven inferior to the other. The choice of pharmacological treatment for children and adolescents with ADHD should be made based on contraindications and the side-effect profile of each drug.
Keywords: attention deficit/hyperactivity disorder, ADHD, atomoxetine, methylphenidate
Procedia PDF Downloads 480
5350 Training of Future Computer Science Teachers Based on Machine Learning Methods
Authors: Meruert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova
Abstract:
The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. The work was examined by students of the educational programs “6B01511-Computer Science”, “7M01511-Computer Science”, “7M01525-STEM Education”, and “8D01511-Computer Science” of the L.N. Gumilyov Eurasian National University. As a result, the advantages and disadvantages of the Haar Cascade (OpenCV), HoG SVM (Histogram of Oriented Gradients, Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection, convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the programming language C++; it includes detectors used for face detection. The Haar Cascade OpenCV algorithm is efficient for fast face detection. The considered work forms a basis for the development of machine learning methods by future computer science teachers.
Keywords: algorithm, artificial intelligence, education, machine learning
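The speed of the Haar Cascade detector compared here rests on Haar-like rectangle features evaluated in constant time via an integral image (summed-area table). A compact sketch of that core idea, not OpenCV's implementation, is:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Pixel sum over the inclusive rectangle (x0,y0)-(x1,y1) in O(1)."""
    s = ii[y1][x1]
    if x0: s -= ii[y1][x0 - 1]
    if y0: s -= ii[y0 - 1][x1]
    if x0 and y0: s += ii[y0 - 1][x0 - 1]
    return s

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    left = box_sum(ii, x, y, x + half - 1, y + h - 1)
    right = box_sum(ii, x + half, y, x + w - 1, y + h - 1)
    return left - right
```

Because every rectangle sum costs four table lookups, a cascade can evaluate thousands of such features per candidate window cheaply, which is why this detector is the fastest of the three compared.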
Procedia PDF Downloads 77
5349 Sinusoidal Roughness Elements in a Square Cavity
Authors: Muhammad Yousaf, Shoaib Usman
Abstract:
Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on the single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the Lattice Boltzmann Method (LBM) was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0. The Rayleigh number (Ra) was explored from 10³ to 10⁶ in the laminar region. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal. The roughness elements were given the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent at Ra = 10⁵.
Keywords: Lattice Boltzmann method, natural convection, Nusselt number, Rayleigh number, roughness
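The single-relaxation-time BGK collision at one D2Q9 lattice node can be sketched as below; with tau = 1 the distributions relax straight to equilibrium in one step. This is the textbook operator only, not the authors' full cavity solver with its thermal coupling and roughness boundary treatment.

```python
# D2Q9 lattice: discrete velocities and weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distributions for density rho, velocity u."""
    usq = ux * ux + uy * uy
    feq = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        feq.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

def bgk_collide(f, tau=1.0):
    """Single-relaxation-time (BGK) collision at one lattice node."""
    rho = sum(f)
    ux = sum(fi * cx for fi, (cx, cy) in zip(f, C)) / rho
    uy = sum(fi * cy for fi, (cx, cy) in zip(f, C)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fe) / tau for fi, fe in zip(f, feq)]
```

Collision is purely local, so a full solver applies this at every node and then streams each population along its lattice velocity, which is what makes the LBM easy to parallelise.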
Procedia PDF Downloads 530
5348 The Experimental House: A Case Study to Assess the Long-Term Performance of Waste Tires Used as Replacement for Natural Material in Backfill Applications for Basement Walls in Manitoba
Authors: M. Shokry Rashwan
Abstract:
This study follows a number of experiments conducted at Red River College (RRC) to investigate the short-term properties of tire-derived aggregate (TDA), produced from shredding off-the-road (OTR) waste tires, in a proposed new application. The application targets replacing, with TDA, the natural material used under concrete slabs and as backfill for the basement slabs and walls of residential homes. The experimental work included determining compressibility, gradation distribution, unit weight, hydraulic conductivity and lateral pressure. Based on the results of those short-term properties, it was decided to move forward to study the long-term performance of this otherwise waste material through an on-site demonstration. A full-scale basement replicating a typical Manitoba home was therefore built at RRC, where both TDA and natural materials (NM) were used side by side. A large number of sensing and measuring systems are used to compare the performance of each material when exposed to typical ground and weather conditions. Parameters monitored and measured include heat losses, moisture migration, drainage ability, lateral pressure, relative movement of slabs and walls, the integrity of groundwater, and radon emissions. Up-to-date results have confirmed part of the conclusions reached from the earlier laboratory experiments. However, other results have shown that construction practices, such as placing and compaction, may need some adjustments to achieve more desirable outcomes. This presentation provides a review of the short-term tests as well as an up-to-date analysis of the on-site demonstration.
Keywords: tire derived aggregate (TDA), basement construction, TDA material properties, lateral pressure of TDA, hydraulic conductivity of TDA
Procedia PDF Downloads 217
5347 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and auto-associative neural networks (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
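The coarse boundary-estimation step can be illustrated with the simplest possible one-class rule: learn a per-feature mean ± k·sigma box from normal-condition training points and flag anything outside it. This is a generic stand-in for the classic OCC techniques mentioned, not the OCCNN2 method; the fine NN refinement is not reproduced, and k and the data are illustrative.

```python
import statistics

def fit_boundary(train):
    """Per-feature (mean, stdev) learned from normal-condition samples.

    `train` is a list of equal-length feature tuples (e.g. tracked
    fundamental frequencies).
    """
    dims = list(zip(*train))
    return [(statistics.mean(d), statistics.stdev(d)) for d in dims]

def is_anomaly(x, boundary, k=3.0):
    """Flag x if any feature falls outside its mean ± k*stdev band."""
    return any(abs(v - m) > k * s for v, (m, s) in zip(x, boundary))
```

Training uses only standard-condition points, matching the one-class setting of the paper: damage labels are never required, only a model of normal behaviour.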
Procedia PDF Downloads 128
5346 Comparative Analysis of in vitro Release profile for Escitalopram and Escitalopram Loaded Nanoparticles
Authors: Rashi Rajput, Manisha Singh
Abstract:
Escitalopram oxalate (ETP), an FDA-approved antidepressant drug of the SSRI (selective serotonin reuptake inhibitor) class, is used in the treatment of general anxiety disorder (GAD) and major depressive disorder (MDD). When taken orally, it is metabolized to S-demethylcitalopram (S-DCT) and S-didemethylcitalopram (S-DDCT) in the liver with the help of the enzymes CYP2C19, CYP3A4 and CYP2D6, causing side effects such as dizziness, fast or irregular heartbeat, headache and nausea. Therefore, targeted and sustained drug delivery would be a helpful tool for increasing its efficacy and reducing side effects. The present study is designed to formulate a mucoadhesive nanoparticle formulation for the same. Escitalopram-loaded polymeric nanoparticles were prepared by the ionic gelation method, and the optimised formulation was characterised by zeta-average particle size (93.63 nm) and zeta potential (−1.89 mV); TEM analysis (range of 60 nm to 115 nm) also confirms the nanometric size range of the drug-loaded nanoparticles, along with a polydispersity index of 0.117. In this research, we have studied the in vitro drug release profile of ETP nanoparticles through a semi-permeable dialysis membrane. Three important characteristics affecting the drug release behaviour were particle size, ionic strength and morphology of the optimised nanoparticles. The data showed that on increasing the particle size of the drug-loaded nanoparticles, the initial burst release was reduced; the burst was comparatively higher for the free drug. The formulation with 1 mg/ml chitosan in 1.5 mg/ml tripolyphosphate solution showed steady release over the entire release period. These data were then further validated through mathematical modelling to establish the mechanism of drug release kinetics, which showed a typical linear diffusion profile in the optimised ETP-loaded nanoparticles.
Keywords: ionic gelation, mucoadhesive nanoparticle, semi-permeable dialysis membrane, zeta potential
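A "typical linear diffusion profile" of this kind is commonly checked against the Higuchi model, Q(t) = k·√t, i.e. cumulative release linear in the square root of time; the rate constant k has a closed-form least-squares fit. The data points below are synthetic, not the study's release measurements.

```python
import math

def higuchi_k(times, released):
    """Least-squares Higuchi rate constant for Q(t) = k * sqrt(t).

    Regression through the origin on x = sqrt(t): k = sum(Q*x) / sum(x^2),
    and sum(x^2) is simply sum(t).
    """
    num = sum(q * math.sqrt(t) for t, q in zip(times, released))
    den = sum(times)
    return num / den

def predict(k, t):
    """Cumulative release predicted by the fitted Higuchi model."""
    return k * math.sqrt(t)
```

In practice one would fit several candidate models (zero-order, first-order, Higuchi, Korsmeyer-Peppas) and pick the best-fitting one to argue for a diffusion-controlled mechanism.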
Procedia PDF Downloads 298
5345 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, established in 2009 to reduce private vehicle usage and ease rush-hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With private vehicle ownership gradually increasing and road space reduced by the construction of the BRT lane, private vehicle users intuitively invade the exclusive BRT lane, creating local traffic along the BRT network. The cost of an invaded BRT lane becomes the same as that of the ordinary road network, making the BRT, which is supposed to be the city's main public transportation, unreliable. Efforts to guard critical lanes against invasion by allocating traffic agents at several intersections have improved congestion levels along the lane. Given a fixed number of traffic agents, this study uses an analytical approach to find the best traffic agent deployment strategy on a simplified Jakarta road network that minimises the BRT link cost, which is expected to improve the time reliability of the BRT system. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. The main constraint of that method is that traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms exhibit linear growth in simulation time and yield minimised BRT costs approaching the brute-force optimum. Further analysis of the overall network link cost should be performed to assess the impact of traffic agent deployment on the network system.
Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
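The greedy heuristic named in the keywords can be sketched as follows. This is a toy stand-in, not the authors' implementation: the intersection names and per-intersection cost reductions are invented, and a real evaluation would re-run the user-equilibrium assignment for each candidate placement.

```python
# Greedy allocation of a fixed budget of traffic agents to intersections.
# Each intersection carries an (invented) BRT link-cost reduction achieved
# by posting one agent there.

def greedy_allocate(cost_reduction: dict, n_agents: int) -> list:
    """Pick the n_agents intersections with the largest cost reduction."""
    ranked = sorted(cost_reduction, key=cost_reduction.get, reverse=True)
    return ranked[:n_agents]

reductions = {"Semanggi": 12.5, "Harmoni": 9.1, "Kuningan": 15.3, "Grogol": 4.2}
chosen = greedy_allocate(reductions, n_agents=2)
print(chosen)  # → ['Kuningan', 'Semanggi']
```

A metaheuristic such as the one proposed would explore agent placements more globally instead of committing to the locally best intersection at each step.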
Procedia PDF Downloads 234
5344 The Influence of English Immersion Program on Academic Performance: Case Study at a Sino-US Cooperative University in China
Authors: Leah Li Echiverri, Haoyu Shang, Yue Li
Abstract:
Wenzhou-Kean University (WKU) is a Sino-US cooperative university in China. It practices an English Immersion Program (EIP), in which all courses are taught in English, and class discussions and presentations are pervasively interwoven into the design of students' learning experiences. This WKU model has had positive influences on students and is in some ways ahead of traditional college English majors. However, literature supporting perceptions of the positive outcomes of this teaching and learning model remains scarce. The distinctive profile of Chinese-ESL students in an English Medium of Instruction (EMI) environment contributes further to this scarcity compared to existing studies conducted among ESL learners in Western educational settings. Hence, this study investigated students' perceptions of the English Immersion Program and determined how it influences Chinese-ESL students' academic performance (AP). The research provides empirical data that can help educators, teaching practitioners, university administrators, and other researchers make informed decisions when developing curricular reforms, instructional and pedagogical methods, and university-wide support programs using this educational model. The purpose of the study was to establish the relationship between the English Immersion Program and academic performance among Chinese-ESL students enrolled at WKU for the academic year 2020-2021. Course length, immersion location, course type, and instructional design were the constructs of the English Immersion Program; English language learning, learning efficiency, and class participation were used to measure academic performance. A descriptive-correlational design was used in this cross-sectional research project, and a quantitative approach to data analysis was applied to determine the relationship between the English Immersion Program and Chinese-ESL students' academic performance.
The research was conducted at WKU, a Chinese-American jointly established higher educational institution located in Wenzhou, Zhejiang Province. Convenience, random, and snowball sampling yielded 283 students, a response rate of 10.5%, to represent the WKU student population. The questionnaire was posted on the survey website Wenjuanxing and shared via QQ and WeChat. Cronbach's alpha was used to test the reliability of the research instrument. Findings revealed that when professors integrate technology (PowerPoint, videos, and audio) into teaching, students pay more attention, which contributes to the acquisition of more professional knowledge in their major courses. As to immersion location, students perceive WKU as a good place to study that gives them a high degree of confidence to talk with their professors in English, which also contributes to their English fluency and better pronunciation in their communication. In the construct of instructional design, the use of pictures, video clips, professors' non-verbal communication, and demonstrations of concern for students encouraged more active class participation. Findings on course length and academic performance indicated that students perceive taking courses during the fall and spring terms as contributing moderately to their academic performance. In conclusion, the findings revealed a significantly strong positive relationship between course type, immersion location, instructional design, and academic performance.
Keywords: class participation, English immersion program, English language learning, learning efficiency
Procedia PDF Downloads 177
5343 Computational Fluid Dynamic Modelling of the Desander: A Case Study from Pakistan
Authors: Ali Heidari, Hosain Ardalan
Abstract:
A CFD model was developed for a desander on the waterway of the Madyan Hydro Power Plant (MHPP), which is under construction in northeast Pakistan. An underground desander was designed to settle the sediments upstream of the 14 km long headrace tunnel. The desander chamber adopted in the feasibility design on the left bank of the river consists of 2 caverns, each including 2 basins with a flushing-type desander. A 3D flow simulation was developed to interpret the desander performance in terms of flow velocity, and a particle-based model was then developed to check the sediment particle sizes in different areas of the desander. Eleven scenarios were defined for different configurations of the desander, covering the vertical slope of the transition, symmetric and asymmetric entrances, the basin net length, and the tranquilizer rack specifications. The model runtime on a medium-class supercomputer was several days for each scenario because of the time step required by the defined cell size of the 3D model; the simulated duration also had to be extended to cover the travel time of sediment particles along the desander. The results of the 3D models for different entrance transition slopes showed that a steep transition zone is not acceptable due to turbulence and vortices at the transition. The sediment drainage channel was extended to the transition with an expanding side slope upstream to achieve better trapping performance for bigger particles. The desander configuration and net length were modelled in different scenarios to meet the design particle-size removal criteria of 0.2 and 0.3 mm. The results show that the feasibility-stage configuration, with a net length of 204 meters and a transition angle of 34°, is overdesigned; on the other hand, reducing the desander net length below 135 meters does not fulfil the design criterion of 0.2 mm particle-size removal.
The scenarios included asymmetric and symmetric entrance transition zone configurations for the four basins. The CFD results confirmed the symmetric desander configuration, with a net length of 135 m and a transition angle of 34° to the horizon, as the optimum configuration; it provides a removal efficiency of 97% for a particle size of 0.2 mm. The CFD results also show that horizontal tranquilizer racks are risky and do not help sediment trapping in the basin, whereas inclined tranquilizer racks decrease the turbulence by transferring the flow energy into the main basin. Nonetheless, more evaluation is needed to optimise the transition zone length by using a tranquilizer at the entrance and to evaluate tranquilizer racks with vertical alignment by building a suitable physical model.
Keywords: CFD, sediment, desander, Madyan
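As a rough cross-check on basin sizing of this kind, an ideal (Camp/Hazen) settling-basin estimate compares the particle settling velocity with the basin's overflow rate. The sketch below is not part of the study's CFD workflow: the flow rate and basin dimensions are invented, and Stokes' law is strictly valid only for fine particles settling in the laminar regime.

```python
import math

def stokes_settling_velocity(d_m, rho_s=2650.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity (m/s) for a sphere of diameter d_m (m)."""
    return g * (rho_s - rho_w) * d_m**2 / (18.0 * mu)

def ideal_removal_efficiency(w_s, Q, B, L):
    """Camp/Hazen ideal basin: efficiency = w_s / (Q / (B*L)), capped at 1."""
    overflow_rate = Q / (B * L)          # m/s
    return min(1.0, w_s / overflow_rate)

# Invented example: 0.2 mm particle, 30 m3/s per basin, 15 m x 135 m plan area.
w_s = stokes_settling_velocity(0.2e-3)
eta = ideal_removal_efficiency(w_s, Q=30.0, B=15.0, L=135.0)
print(f"settling velocity = {w_s*1000:.1f} mm/s, ideal removal = {eta:.0%}")
```

Such a hand estimate only brackets the answer; the CFD model is needed precisely because turbulence and the entrance transition violate the ideal-basin assumptions.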
Procedia PDF Downloads 3
5342 Intrusion Detection and Prevention System (IDPS) in Cloud Computing Using Anomaly-Based and Signature-Based Detection Techniques
Authors: John Onyima, Ikechukwu Ezepue
Abstract:
Virtualization and cloud computing are among the fastest-growing computing innovations in recent times. Organisations all over the world are moving their computing services to the cloud because it rapidly transforms the organisation's infrastructure, improves resource utilisation, and reduces cost. However, this technology brings new security threats and challenges concerning safety, reliability and data confidentiality. Evidently, no single security technique can guarantee protection against malicious attacks on a cloud computing network; hence an integrated model of an intrusion detection and prevention system is proposed. Anomaly-based and signature-based detection techniques are integrated to enable the network and its hosts to defend themselves with some level of intelligence. The anomaly-based detection was implemented using the local deviation factor graph-based (LDFGB) algorithm, while the signature-based detection was implemented using Snort. Results from these collaborative intrusion detection and prevention techniques show a robust and efficient security architecture for cloud computing networks.
Keywords: anomaly-based detection, cloud computing, intrusion detection, intrusion prevention, signature-based detection
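The anomaly-based side can be illustrated with a generic k-nearest-neighbour deviation score — a simplified stand-in, not the LDFGB algorithm itself; the two-feature traffic records below are invented.

```python
import math

def knn_anomaly_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbours;
    larger scores mean the point deviates more from local traffic patterns."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Invented 2-feature traffic records (e.g. packets/s, mean payload size);
# the last record is an injected outlier.
traffic = [(10, 200), (11, 210), (9, 195), (10, 205), (90, 1500)]
scores = knn_anomaly_scores(traffic)
print(scores.index(max(scores)))  # → 4, the injected outlier
```

In an integrated IDPS, records flagged this way would be cross-checked against the signature engine (e.g. Snort rules) before a prevention action is taken.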
Procedia PDF Downloads 314
5341 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference at the intended location. In order to eliminate the angle-distance coupling of the traditional FDA and make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy and improved resolution and sidelobe-level performance. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm underestimates the covariance matrix; the resulting estimation error causes beam distortion, so the output pattern cannot form a dot-shape beam, and main-lobe deviation and high sidelobe levels also appear. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their dispersion and improving the performance of the beamformer. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shape beam from limited snapshots, reduces the sidelobe level, and improves the robustness of beamforming.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
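The eigenvalue-correction step can be sketched in a generic narrowband setting. This is a simplified illustration, not the paper's multi-carrier FDA model: the array, the snapshot data, and the correction rule (a simple eigenvalue floor standing in for the exponential correction) are all invented, after which single-constraint LCMV (MVDR) weights are formed.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 20                      # sensors, (limited) snapshots
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))  # ULA steering vector

# Simulated snapshots: desired signal plus complex noise (invented scenario).
X = a[:, None] * rng.standard_normal(N) + 0.5 * (
    rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N            # sample covariance (poorly estimated at small N)

# Eigen-decomposition and correction of the small (noise-subspace) eigenvalues.
vals, vecs = np.linalg.eigh(R)
floor = vals.max() * 1e-2
vals_corr = np.maximum(vals, floor)        # stand-in for the exponential correction
R_corr = (vecs * vals_corr) @ vecs.conj().T

# MVDR (single-constraint LCMV) weights with the corrected covariance.
Rinv_a = np.linalg.solve(R_corr, a)
w = Rinv_a / (a.conj() @ Rinv_a)
print(abs(w.conj() @ a))          # distortionless constraint: equals 1
```

Raising the scattered small eigenvalues toward a common level is what restores robustness: the inverse of the corrected covariance no longer amplifies directions whose eigenvalues were underestimated from too few snapshots.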
Procedia PDF Downloads 133
5340 Interaction or Conflict: Addressing Modern Trans-Himalayan Pastoralism and Wildlife
Authors: Amit Kaushik
Abstract:
The kiang (Equus kiang kiang) is an indigenous large-bodied herbivore, and in India it is restricted to limited geographies of Ladakh. One such area is the Tsokar Basin. With the rise in global pashmina demand, livestock numbers have grown significantly, and previous studies have reported conflict between a nomadic pastoral community, the Changpas, and the kiang. Absentee pastoralism (in lieu of pure pastoralism) and tourism are the two major economic activities among the local people, but social, economic, political, and ecological changes are inevitable in such a contemporary system. The study examines several factors influencing the local pastoral economy and focuses on the presence of two non-human cohabitants, the kiang and the wolf. It used semi-structured interviews and the vehicle count method in four different seasons. The results show that people perceive the kiang as a threat but also reveal a level of tolerance towards it. Locals predicted high kiang numbers, ranging from 200 to 3000 in the basin, yet contrastingly ranked the kiang behind wolves, which are very few in number. Due to a lack of scientific evidence, the kiang population status remains obscure, and local people's concerns remain unaddressed. But how does this competitive dysfunction take place? On one side, the rural development and animal husbandry departments aim at developing the area by providing stall feed and tourism, whereas, on the other side, the wildlife department emphasises wildlife conservation. Therefore, managers and planners may need to be cautious about the local socio-ecological complexities and may require inter-departmental communication. The study concludes that an interdisciplinary inquiry may be an important tool in understanding such a precarious situation and may be used in policy-making processes.
Keywords: coexistence, human-livestock-wildlife interactions, interdisciplinary approach, kiang, policymaking, Tsokar
Procedia PDF Downloads 143
5339 Advanced Exergetic Analysis: Decomposition Method Applied to a Membrane-Based Hard Coal Oxyfuel Power Plant
Authors: Renzo Castillo, George Tsatsaronis
Abstract:
High-temperature ceramic membranes for air separation represent an important option for reducing the significant efficiency drops incurred by state-of-the-art cryogenic air separation for the high-tonnage oxygen production required in oxyfuel power stations. This study focuses on the thermodynamic analysis of two power plant designs: the state-of-the-art supercritical 600 °C hard coal plant (reference power plant Nordrhein-Westfalen) and a membrane-based oxyfuel concept implemented in this reference plant. In the latter case, the oxygen is separated by a mixed-conducting hollow-fibre perovskite membrane unit in the three-end operation mode, simulated under vacuum conditions on the permeate side and high-pressure conditions on the feed side. The thermodynamic performance of each plant concept is assessed by conventional exergetic analysis, which determines the location, magnitude and sources of efficiency losses, and by advanced exergetic analysis, in which the endogenous/exogenous and avoidable/unavoidable parts of exergy destruction are calculated at the component and full-process levels. These calculations identify thermodynamic interdependencies among components and reveal the real potential for efficiency improvements. The endogenous and exogenous exergy destruction portions are calculated by the decomposition method, a recently developed straightforward methodology suitable for complex power stations with a large number of process components. Lastly, an improvement priority ranking for relevant components, as well as suggested changes in process layouts, is presented for both power stations.
Keywords: exergy, carbon capture and storage, ceramic membranes, perovskite, oxyfuel combustion
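The component-level bookkeeping behind a conventional exergetic analysis uses the standard definitions E_D = E_F − E_P, ε = E_P/E_F, and y_D = E_D/E_F,tot. A generic sketch with invented component values (not the study's plant data):

```python
# Conventional exergetic analysis bookkeeping for a few components.
# Fuel exergy E_F and product exergy E_P (MW) are invented example values.
components = {
    "boiler":    {"E_F": 900.0, "E_P": 520.0},
    "turbine":   {"E_F": 500.0, "E_P": 470.0},
    "condenser": {"E_F": 60.0,  "E_P": 20.0},
}

E_F_total = components["boiler"]["E_F"]   # overall fuel exergy of the process

for name, c in components.items():
    E_D = c["E_F"] - c["E_P"]             # exergy destruction in the component
    eps = c["E_P"] / c["E_F"]             # exergetic efficiency
    y_D = E_D / E_F_total                 # destruction ratio vs total fuel exergy
    print(f"{name}: E_D = {E_D:.0f} MW, efficiency = {eps:.1%}, y_D = {y_D:.1%}")
```

The advanced analysis then splits each E_D into endogenous/exogenous and avoidable/unavoidable parts, which requires additional hybrid-process calculations beyond this simple balance.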
Procedia PDF Downloads 189
5338 Design and Development of an Algorithm for Prioritizing the Test Cases Using Neural Network as Classifier
Authors: Amit Verma, Simranjeet Kaur, Sandeep Kaur
Abstract:
Test case prioritization (TCP) has gained widespread acceptance, as it often results in good-quality software free from defects. Due to the increasing rate of faults in software, traditional prioritization techniques result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases in large test suites, and little emphasis has been placed on automating the TCP process. The objective of this paper is to detect the priorities of different test cases using an artificial neural network, which predicts the correct priorities with the help of the backpropagation algorithm. In the proposed work, priorities are assigned to different test cases based on their frequency; the ANN then predicts whether the correct priority has been assigned to each test case and raises an interrupt when a wrong priority is assigned. Classifiers are used to classify the test cases of different priorities. The proposed algorithm is effective, as it reduces complexity with robust efficiency and automates the process of prioritizing test cases.
Keywords: test case prioritization, classification, artificial neural networks, TF-IDF
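The frequency-based priority assignment can be sketched with a TF-IDF weighting over test-case descriptions. This is a toy stand-in for the paper's pipeline: the test cases, tokenisation, and scoring rule are invented, and the backpropagation classifier that validates the assigned priorities is omitted.

```python
import math
from collections import Counter

# Invented test-case descriptions; in practice these come from the test suite.
test_cases = {
    "tc1": "login user password login",
    "tc2": "payment checkout cart payment payment",
    "tc3": "login page render",
}

docs = {tc: text.split() for tc, text in test_cases.items()}
n_docs = len(docs)

def tf_idf_weight(tokens):
    """Sum of TF-IDF weights of a test case's tokens, used as its priority score."""
    tf = Counter(tokens)
    score = 0.0
    for term, count in tf.items():
        df = sum(term in other for other in docs.values())   # document frequency
        score += (count / len(tokens)) * math.log((1 + n_docs) / (1 + df))
    return score

ranking = sorted(docs, key=lambda tc: tf_idf_weight(docs[tc]), reverse=True)
print(ranking)  # test cases ordered by descending TF-IDF priority score
```

In the full scheme, these scores would define the assigned priorities, and the trained ANN would flag test cases whose predicted class disagrees with the assignment.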
Procedia PDF Downloads 402
5337 Numerical Method for Productivity Prediction of Water-Producing Gas Well with Complex 3D Fractures: Case Study of Xujiahe Gas Well in Sichuan Basin
Authors: Hong Li, Haiyang Yu, Shiqing Cheng, Nai Cao, Zhiliang Shi
Abstract:
Unconventional resources have gradually become the main direction of oil and gas exploration and development. However, the productivity of gas wells, the level of water production, and the seepage law in tight fractured gas reservoirs vary greatly, which is why production prediction is so difficult. Firstly, a three-dimensional multi-scale fracture, multiphase mathematical model based on an embedded discrete fracture model (EDFM) is established, and the material balance method is used to calculate the water body multiple from the production performance characteristics of a water-producing gas well, which helps construct a 'virtual water body'. Based on these, this paper presents a numerical simulation workflow that can adapt to different production modes of gas wells. The research results show that fractures have a double-sided effect: on the positive side, they can increase the initial production capacity, but on the negative side, they can connect to the water body, which leads to a rapid drop in gas production and a rapid rise in water production, showing a 'scissor-like' characteristic. It is worth noting that fractures with different angles have different abilities to connect with the water body: the higher the fracture angle, the earlier water may break through. When the reservoir is a single layer, there may be a stable water-free production period before the fractures connect with the water body; once connected, the 'scissors shape' appears. If the reservoir has multiple layers, gas and water are produced at the same time. The above gas-water relationship matches the production data of gas wells in the Xujiahe gas reservoir in the Sichuan Basin. The method is used to predict the productivity of a well with hydraulic fractures in this gas reservoir, and the predictions agree with on-site production data to more than 90%.
This research idea thus has great potential for the productivity prediction of water-producing gas wells, and early prediction results are of great significance in guiding the design of development plans.
Keywords: EDFM, multiphase, multilayer, water body
Procedia PDF Downloads 198
5336 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information
Authors: Babar Khan, Wang Zhijie
Abstract:
Machine vision has played a significant role in industrial automation, imitating a wide variety of human functions while providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite intensive research, there have been no attempts to classify fabric prints based on both printed texture and colour; most research so far encompasses only black-and-white or grey-scale images. We propose a biologically inspired processing architecture to classify fabrics with respect to fabric print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.
Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel
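Opponent colour channels are commonly computed per pixel as red-green and blue-yellow differences. The sketch below uses one plausible formulation (the exact channel definitions used by the authors are not given in the abstract), with a tiny invented image:

```python
import numpy as np

def opponent_channels(rgb):
    """Red-green and blue-yellow opponent channels for an HxWx3 float image.
    One common definition: single-opponent cells respond to these signals,
    double-opponent cells to their spatial contrasts."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    return rg, by

# Tiny invented image: one pure-red pixel and one pure-blue pixel.
img = np.array([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])
rg, by = opponent_channels(img)
print(rg[0], by[0])  # red pixel: rg positive; blue pixel: by positive
```

Feeding such channels (rather than grey-scale intensity) into the HMAX-style S1/C1 filter stages is what lets the descriptor capture printed colour as well as texture.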
Procedia PDF Downloads 490
5335 A Mixture Vine Copula Structures Model for Dependence Wind Speed among Wind Farms and Its Application in Reactive Power Optimization
Authors: Yibin Qiu, Yubo Ouyang, Shihan Li, Guorui Zhang, Qi Li, Weirong Chen
Abstract:
This paper explores the impact of high-dimensional dependencies of wind speed among wind farms on probabilistic optimal power flow. To obtain the reactive power optimization faster and more accurately, a mixture vine copula structure model combining K-means clustering, C-vine copulas and D-vine copulas is proposed, through which a more accurate correlation model can be obtained. Moreover, a modified backtracking search algorithm (MBSA) and the three-point estimate method are applied to probabilistic optimal power flow. The validity of the mixture vine copula structure model and of the MBSA are tested on the IEEE 30-bus system with measured data from 3 adjacent wind farms in a certain area, and the results indicate the effectiveness of these methods.
Keywords: mixture vine copula structure model, three-point estimate method, probability integral transform, modified backtracking search algorithm, reactive power optimization
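The probability integral transform named in the keywords underpins copula sampling in general. A minimal sketch with a bivariate Gaussian copula and Weibull wind-speed marginals — a much simpler stand-in than the paper's mixture C-/D-vine model, with invented correlation and marginal parameters:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
rho = 0.8                                   # invented dependence between two farms

# Sample correlated standard normals, map to uniforms via the normal CDF
# (the probability integral transform), then to Weibull wind speeds.
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((2, 5000))
u = 0.5 * (1.0 + np.vectorize(erf)(L @ z / np.sqrt(2.0)))   # standard normal CDF

k, lam = 2.0, 8.0                           # invented Weibull shape / scale (m/s)
wind = lam * (-np.log(1.0 - u)) ** (1.0 / k)   # inverse Weibull CDF

corr = np.corrcoef(wind)[0, 1]
print(f"sample correlation of wind speeds: {corr:.2f}")
```

A vine copula generalises this by chaining bivariate copulas (C-vine or D-vine structures) so that higher-dimensional, non-Gaussian dependence among many wind farms can be represented pair by pair.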
Procedia PDF Downloads 251
5334 Worst-Case Load Shedding in Electric Power Networks
Authors: Fu Lin
Abstract:
We consider the worst-case load-shedding problem in electric power networks where a number of transmission lines are to be taken out of service. The objective is to identify a prespecified number of line outages that lead to the maximum interruption of power generation and load at the transmission level, subject to the active power-flow model, the load and generation capacity of the buses, and the phase-angle limit across the transmission lines. For this nonlinear model with binary constraints, we show that all decision variables are separable except for the nonlinear power-flow equations. We develop an iterative decomposition algorithm that converts the worst-case load-shedding problem into a sequence of small subproblems. We show that the subproblems are either convex problems that can be solved efficiently or nonconvex problems that have closed-form solutions. Consequently, our approach is scalable to large networks. Furthermore, we prove the convergence of our algorithm to a critical point, and the objective value is guaranteed to decrease throughout the iterations. Numerical experiments with IEEE test cases demonstrate the effectiveness of the developed approach.
Keywords: load shedding, power system, proximal alternating linearization method, vulnerability analysis
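The active (DC) power-flow model underlying such analyses relates bus injections to phase angles through the susceptance matrix, B·θ = P. A minimal sketch on an invented 3-bus network — illustrative of the power-flow constraint only, not of the paper's decomposition algorithm:

```python
import numpy as np

# Invented 3-bus DC power-flow example: off-diagonal B[i, j] = -b_ij,
# diagonal entries are the sums of incident line susceptances.
b12, b13, b23 = 10.0, 5.0, 8.0          # line susceptances (p.u.)
B = np.array([[b12 + b13, -b12, -b13],
              [-b12, b12 + b23, -b23],
              [-b13, -b23, b13 + b23]])

P = np.array([1.0, -0.6, -0.4])          # injections: gen at bus 1, loads at 2, 3

# Fix bus 1 as the slack (theta = 0) and solve the reduced linear system.
theta = np.zeros(3)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

flow_12 = b12 * (theta[0] - theta[1])    # line flow from bus 1 to bus 2
flow_13 = b13 * (theta[0] - theta[2])
print(flow_12 + flow_13)                 # equals the injection at bus 1 (= 1.0)
```

A worst-case outage analysis repeatedly removes line susceptances from B and checks which loads can no longer be served within the line and phase-angle limits.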
Procedia PDF Downloads 144
5333 Analysis of Noise Environment and Acoustics Material in Residential Building
Authors: Heruanda Alviana Giska Barabah, Hilda Rasnia Hapsari
Abstract:
Acoustic phenomena create the conditions for an acoustic interpretation that describes the characteristics of an environment. In urban areas, heterogeneous and simultaneous human activity tends to form a soundscape different from that of other regions, and one characteristic of urban areas that shapes this soundscape is the presence of vertical housing, i.e., residential buildings. Activities both within such a building and in its surrounding environment shape a soundscape with particular characteristics. The acoustic comfort of a residential building is an important aspect, and this demand has made building features more diverse. The initial step of mapping acoustic conditions in a soundscape is important, as it is the method by which uncomfortable conditions are identified. Noise generated by road traffic, railways, and aircraft is an important consideration, especially for urban people; the proper design of the building therefore becomes very important as an effort to deliver appropriate acoustic comfort. In this paper, the authors developed noise mapping at the location of a residential building. Mapping was done by measuring at several points relative to the noise sources, and the mapping results became the basis for modelling the acoustic waves interacting with the building model. Material selection was done based on a literature study and modelling simulation using Insul, considering the absorption coefficient and the Sound Transmission Class. Acoustic rays were analysed with the ray tracing method using the COMSOL simulation software, which can show the movement of acoustic rays and their interaction with a boundary. The results of this study can be used to select boundary materials for residential buildings, as well as to improve the acoustic quality in the acoustic zones that are formed.
Keywords: residential building, noise, absorption coefficient, sound transmission class, ray tracing
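For a first estimate of how a partition attenuates airborne sound (before dedicated tools such as Insul are used), the empirical mass law is often quoted; one common field-incidence form is TL ≈ 20·log10(m·f) − 47 dB for surface mass m (kg/m²) and frequency f (Hz). The sketch below applies this approximation to invented wall data:

```python
import math

def mass_law_tl(surface_mass_kg_m2, freq_hz):
    """Field-incidence mass-law transmission loss estimate in dB
    (TL = 20 log10(m * f) - 47, a common empirical approximation)."""
    return 20.0 * math.log10(surface_mass_kg_m2 * freq_hz) - 47.0

# Invented example: a 10 kg/m2 gypsum-board partition at 500 Hz.
tl = mass_law_tl(10.0, 500.0)
print(f"estimated TL = {tl:.1f} dB")  # → estimated TL = 27.0 dB
```

The mass law ignores coincidence dips and flanking paths, which is why measured Sound Transmission Class ratings and simulation results can differ noticeably from this estimate.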
Procedia PDF Downloads 251
5332 Antagonistic Activity of Streptococcus Salivarius K12 Against Pathogenic and Opportunistic Microorganisms
Authors: Andreev V. A., Kovalenko T. N., Privolnev V. V., Chernavin A. V., Knyazeva E. R.
Abstract:
Aim: To evaluate the antagonistic activity of Streptococcus salivarius K12 (SsK12) against pathogens of ENT and oral cavity infections (S. pneumoniae, S. pyogenes, S. aureus), gram-negative bacteria (E. coli, P. aeruginosa) and C. albicans. Materials and methods: The probiotic strain SsK12 was isolated from a dietary supplement containing at least 1 × 10⁹ CFU per tablet. The tablet was dissolved in enrichment broth; the resulting suspension was seeded on 5% blood agar and incubated at 35 °C in 4-6% CO₂ for 48 hours, and the grown culture was identified as Streptococcus salivarius by MALDI-TOF mass spectrometry. The antagonistic activity of SsK12 was evaluated using the perpendicular streak technique. The daily SsK12 culture was inoculated with a loop as heavy streaks on one side of a Petri dish with Mueller-Hinton agar (MHA) and incubated for 24 hours at 35 °C under anaerobic conditions, on the assumption that bacteriocins would diffuse over the whole area of the agar medium. On the next day, clinical isolates of S. pneumoniae, S. pyogenes, S. aureus, E. coli, P. aeruginosa and C. albicans were streaked on the clear side of the MHA Petri dish. An MHA Petri dish inoculated on the same day with SsK12 (on one part) and the respective clinical isolates streaked perpendicularly (on the other part) was used as the control. Results: There was no growth of S. pyogenes on the Petri dish with the daily SsK12 culture, and only a few colonies of S. pneumoniae grew, whereas S. aureus, E. coli, P. aeruginosa and C. albicans grew along the inoculated streaks. On the control Petri dish, with simultaneous inoculation of the SsK12 strain and the test cultures, all the tested isolates grew. Conclusions: (1) SsK12 possesses strong antagonistic activity against S. pyogenes and good activity against S. pneumoniae. (2) SsK12 showed no antagonistic activity against S. aureus, E. coli, P. aeruginosa and C. albicans.
(3) The antagonistic properties of SsK12 make it possible to use this probiotic strain for prophylaxis of recurrent ENT infections.
Keywords: probiotics, SsK12, Streptococcus salivarius K12, antagonistic activity
Procedia PDF Downloads 62