Search results for: action based method
39154 Sexting Phenomenon in Educational Settings: A Data Mining Approach
Authors: Koutsopoulou Ioanna, Gkintoni Evgenia, Halkiopoulos Constantinos, Antonopoulou Hera
Abstract:
Recent advances in Information and Communication Technology (ICT), the ever-increasing use of technological equipment among adolescents and young adults, unattended access to the internet and social media, and uncontrolled use of smartphones and PCs have caused social problems like sexting to emerge. The main purpose of the present article is, first, to present an analytic theoretical framework of sexting as a recent social phenomenon based on studies conducted over the last decade or so; and second, to investigate the sexting perceptions of Greek students and other social network users, to record how often social media users exchange sexual messages, and to identify demographic predictors. Data from 1,000 students were collected, and all statistical analysis was carried out with the WEKA software package. The results indicate, among other things, that data mining methods are an important tool for drawing conclusions that could affect decision and policy making, especially in educational psychology and related social topics. To sum up, sexting carries many risks for adolescent and young adult students in Greece and needs to be better addressed by stakeholders as well as society in general. Furthermore, policy makers, legislators and authorities will have to take action to protect minors. Prevention strategies based on Greek cultural specificities are proposed. This social problem has raised concerns in recent years and will most likely escalate in global communities in the future.
Keywords: educational ethics, sexting, Greek sexters, sex education, data mining
Procedia PDF Downloads 182
39153 A Numerical Description of a Fibre Reinforced Concrete Using a Genetic Algorithm
Authors: Henrik L. Funke, Lars Ulke-Winter, Sandra Gelbrich, Lothar Kroll
Abstract:
This work reports on an approach for the automatic adaptation of concrete formulations based on genetic algorithms (GA) to optimize a wide range of different fit-functions. To achieve this goal, a method was developed that provides a numerical description of a fibre reinforced concrete (FRC) mixture with respect to the production technology and the property spectrum of the concrete. In a first step, an FRC mixture with seven fixed components was characterized by varying the amounts of those components. For that purpose, ten concrete mixtures were prepared and tested. The testing procedure comprised flow spread, compressive strength and bending tensile strength. The analysis and approximation of the measured data were carried out by GAs. The aim was to obtain a closed mathematical expression that best describes the given point cloud of FRC data by applying a Gene Expression Programming with Free Coefficients (GEP-FC) strategy. The seven-parameter FRC-mixture model generated by this method correlated well with the measured data. The developed procedure can be used to find closed mathematical expressions for concrete mixtures based on measured data.
Keywords: concrete design, fibre reinforced concrete, genetic algorithms, GEP-FC
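As a hedged illustration of the coefficient-fitting step, the sketch below runs a minimal real-coded genetic algorithm on synthetic mixture data. Note that the paper's GEP-FC strategy also evolves the structure of the expression; here the model form, the data, and the GA settings are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "mixture" data: strength as a function of fibre content (illustrative only).
x = np.linspace(0.0, 1.0, 10)
y = 30.0 + 12.0 * x - 5.0 * x**2 + rng.normal(0.0, 0.1, x.size)

def mse(coeffs):
    """Fitness: mean squared error of a fixed quadratic model with free coefficients."""
    a, b, c = coeffs
    return np.mean((a + b * x + c * x**2 - y) ** 2)

# Minimal GA: real-coded population, elitist selection, Gaussian mutation.
pop = rng.normal(0.0, 10.0, (60, 3))
for gen in range(300):
    fitness = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]                       # keep the 10 best
    children = elite[rng.integers(0, 10, 50)] + rng.normal(0.0, 0.5, (50, 3))
    pop = np.vstack([elite, children])

best = pop[np.argmin([mse(ind) for ind in pop])]
print(best, mse(best))
```

Elitism guarantees the best fit never degrades between generations; GEP-FC replaces the fixed quadratic with an evolved expression tree.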
Procedia PDF Downloads 280
39152 Implicit Off-Grid Block Method for Solving Fourth and Fifth Order Ordinary Differential Equations Directly
Authors: Olusola Ezekiel Abolarin, Gift E. Noah
Abstract:
This research work considered an innovative procedure to numerically approximate higher-order initial value problems (IVP) of ordinary differential equations (ODE) using the Legendre polynomial as the basis function. The proposed method is a half-step, self-starting block integrator employed to approximate fourth and fifth order IVPs without reduction to lower order. The method was developed through a collocation and interpolation approach. The basic properties of the method, such as convergence, consistency and stability, were well investigated. Several test problems were considered, and the results compared favorably with both exact solutions and other existing methods.
Keywords: initial value problem, ordinary differential equation, implicit off-grid block method, collocation, interpolation
Procedia PDF Downloads 84
39151 First Order Reversal Curve Method for Characterization of Magnetic Nanostructures
Authors: Bashara Want
Abstract:
One of the key factors limiting the performance of magnetic memory is that the coercivity has a distribution of finite width, and reversal starts at the weakest link in that distribution. One must therefore first know the distribution of coercivities in order to learn how to reduce its width and increase the coercive field. The First Order Reversal Curve (FORC) method characterizes a system with hysteresis via the distribution of local coercivities and, in addition, the local interaction field. The method is more versatile than conventional major hysteresis loops, which give only the averaged statistical behaviour of the magnetic system. The FORC method will be presented and discussed at the conference.
Keywords: magnetic materials, hysteresis, first-order reversal curve method, nanostructures
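The FORC distribution is conventionally defined as the mixed second derivative rho(H_r, H) = -(1/2) d²M/(dH_r dH) of the magnetization measured along the reversal curves. A minimal numerical sketch, using a purely synthetic magnetization surface rather than measured reversal curves:

```python
import numpy as np

# Synthetic family of first-order reversal curves M(H_r, H); this analytic surface
# is an illustrative stand-in for measured data, not a physical model.
Hr = np.linspace(-1.0, 1.0, 81)            # reversal fields
H = np.linspace(-1.0, 1.0, 81)             # applied fields
HR, HH = np.meshgrid(Hr, H, indexing="ij")
M = np.tanh(4.0 * HH) - 0.5 * np.exp(-((HH - HR) ** 2) / 0.1)

# FORC distribution: rho = -(1/2) d^2 M / (dH_r dH), evaluated numerically.
dM_dH = np.gradient(M, H, axis=1)          # derivative along the applied field
rho = -0.5 * np.gradient(dM_dH, Hr, axis=0)  # then along the reversal field
print(rho.shape)
```

On real data the same two `np.gradient` calls are applied to the measured M grid, usually after local polynomial smoothing to tame the noise amplified by double differentiation.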
Procedia PDF Downloads 82
39150 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize an agent's policy so as to maximize the cumulative reward it receives over a given period. Nevertheless, the optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a complementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant progress in autonomous decision-making systems, and the convergence of these two fields has the potential for a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL: taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes practical implementations of EOMs in RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios, highlighting how EOMs enable RL agents to adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
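As a toy illustration of how evolutionary principles can optimize an RL policy, the sketch below applies a simple elitist evolution strategy to a one-parameter linear policy on a trivial scalar regulator task. The environment, policy form, and ES settings are illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout_return(w, steps=20):
    """Toy episodic task: drive the scalar state to zero; reward = -s^2 each step."""
    s, total = 1.0, 0.0
    for _ in range(steps):
        a = w * s                  # linear policy: action proportional to state
        s = s + a                  # trivial dynamics
        total += -s * s
    return total

# Simple elitist evolution strategy: perturb the policy parameter, keep the best.
w = 0.0
for gen in range(40):
    candidates = w + rng.normal(0.0, 0.2, 20)
    returns = np.array([rollout_return(c) for c in candidates])
    if returns.max() > rollout_return(w):
        w = candidates[np.argmax(returns)]
print(w)
```

The optimal policy here is w = -1 (which zeroes the state in one step); no gradient of the return is ever computed, which is exactly what makes evolutionary methods attractive for non-differentiable or noisy RL objectives.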
Procedia PDF Downloads 81
39149 Solid Lipid Nanoparticles of Levamisole Hydrochloride
Authors: Surendra Agrawal, Pravina Gurjar, Supriya Bhide, Ram Gaud
Abstract:
Levamisole hydrochloride is a prominent anticancer drug in the treatment of colon cancer, but it produces toxic effects due to poor bioavailability and poor cellular uptake by tumor cells. Levamisole is also an unstable drug. Incorporation of this molecule in solid lipids may minimize its exposure to the aqueous environment and partly immobilize the drug molecules within the lipid matrix, both of which may protect the encapsulated drug against degradation. The objectives of the study were to enhance bioavailability by sustaining drug release and to reduce the toxicities associated with the therapy. The solubility of the drug was determined in different lipids to select the components of the Solid Lipid Nanoparticles (SLN). Pseudoternary phase diagrams were created using the aqueous titration method. Formulations were subjected to particle size and stability evaluation to select the test formulations, which were characterized for average particle size, zeta potential, in-vitro drug release and percentage transmittance to optimize the final formulation. SLN of levamisole hydrochloride were prepared by the nanoprecipitation method. Glyceryl behenate (Compritol 888 ATO) was used as the lipid core, with Tween 80 as surfactant and lecithin as co-surfactant in a 1:1 ratio. Entrapment efficiency (EE) was found to be 45.89%. Particle size was in the range of 100-600 nm. The zeta potential of the formulation was -17.0 mV, indicating the stability of the product. The in-vitro release study showed that 66% of the drug was released in 24 hours at pH 7.2, which indicates that the formulation can give controlled action in the intestinal environment. At pH 5.0 it showed 64% release, indicating that it can even release the drug in the acidic environment of tumor cells. In conclusion, the results revealed SLN to be a promising approach to sustain drug release so as to increase bioavailability and cellular uptake of the drug, with reduced toxic effects since the dose can be lowered with controlled delivery.
Keywords: SLN, nanoparticulate delivery of levamisole, pharmacy, pharmaceutical sciences
Procedia PDF Downloads 431
39148 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method
Authors: A.R. Eskandari, M.R. Eskandari
Abstract:
A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.
Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)
Procedia PDF Downloads 387
39147 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation: it involves selecting among different MIMO transmission schemes or modes so as to adapt to varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes, transmit diversity (TD) and spatial multiplexing (SM), using the fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then produces a decision by inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy-logic-based switching technique outperforms a conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity
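A minimal sketch of the inference step, assuming simple piecewise-linear "high" membership functions and two rules; the breakpoint values are illustrative guesses, not the paper's tuned memberships:

```python
import numpy as np

def ramp(x, lo, hi):
    """Degree of membership in a 'high' fuzzy set, rising linearly from lo to hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def switch_decision(rsnr_db, rssi_dbm):
    """Toy fuzzy inference for TD/SM switching (breakpoints are illustrative)."""
    snr_high = ramp(rsnr_db, 5.0, 20.0)
    rssi_high = ramp(rssi_dbm, -90.0, -60.0)
    sm = min(snr_high, rssi_high)                 # rule 1: good channel -> SM
    td = max(1.0 - snr_high, 1.0 - rssi_high)     # rule 2: poor channel -> TD
    score = sm / (sm + td) if (sm + td) > 0 else 0.5  # weighted-average defuzzification
    return "SM" if score > 0.5 else "TD"

print(switch_decision(30.0, -40.0), switch_decision(5.0, -90.0))
```

A strong channel (30 dB RSNR, -40 dBm RSSI) selects spatial multiplexing for throughput, while a weak one falls back to transmit diversity for robustness; in the paper this decision is fed back to the transmitter each adaptation interval.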
Procedia PDF Downloads 152
39146 Value Chain Based New Business Opportunity
Authors: Seonjae Lee, Sungjoo Lee
Abstract:
Discovering new business opportunities is necessary to remain competitive in the current business environment; companies survive rapidly changing industry conditions by adopting new business strategies and reducing technology challenges. Traditionally, two methods are used to discover new businesses. The first is qualitative analysis of expert opinion, through which opportunities are gathered; the second discovers new technologies through quantitative analysis of patent data. The second method increases time and cost, and patent data are restricted in their use for the purpose of discovering business opportunities. This study presents new business opportunities in a form customized to a company's characteristics (sector, size, etc.) from a value chain perspective, and the proposed model contributes to creating new business opportunities. It utilizes the trademark database of the Korean Intellectual Property Office (KIPO) and the proprietary company information database of the Korea Enterprise Data (KED). These data are key to discovering new business opportunities through analysis of competitors and advanced business trademarks (Module 1) and trading analysis of competitors found in the KED (Module 2).
Keywords: value chain, trademark, trading analysis, new business opportunity
Procedia PDF Downloads 372
39145 An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions
Authors: Alireza Gholami, Amir H. D. Markazi
Abstract:
In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not limited to being diagonal, i.e., the plant may be over/under-actuated provided that controllability and observability are preserved. An observer is constructed to directly estimate the state tracking error, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured outputs are not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms.
Keywords: adaptive algorithm, fuzzy systems, membership functions, observer
Procedia PDF Downloads 206
39144 Monitoring Saltwater Corrosion on Steel Samples Using Coda Wave Interferometry in MHZ Frequencies
Authors: Maxime Farin, Emmanuel Moulin, Lynda Chehami, Farouk Benmeddour, Pierre Campistron
Abstract:
Assessing corrosion is crucial in the petrochemical and marine industries. Usual ultrasonic methods based on guided waves can inspect large areas for corrosion but lack precision. We propose a complementary and sensitive ultrasonic method (~10 MHz) based on coda wave interferometry to detect and quantify corrosion at the surface of a steel sample. The method relies on a single piezoelectric transducer, exciting the sample and measuring the scattered coda signals at different instants in time. A laboratory experiment is conducted with a steel sample immersed in salt water for 60 h, with parallel coda and temperature measurements to correct for the coda's dependence on temperature variations. Micrometric changes to the sample surface caused by corrosion are detected in the late coda signals, allowing precise corrosion detection. Moreover, a good correlation is found between a parameter quantifying the temperature-corrected stretching of the coda over time, relative to a corrosion-free reference, and the corroded surface area of the sample recorded with a camera.
Keywords: coda wave interferometry, nondestructive evaluation, corrosion, ultrasonics
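The stretching parameter at the heart of coda wave interferometry can be sketched as follows: the perturbed coda is modeled as a time-dilated copy of the reference, and the dilation maximizing the correlation coefficient is selected. The synthetic trace, imposed stretch, and search grid below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 4000)

# Synthetic coda: decaying sum of random sinusoids (a stand-in for a measured trace).
freqs, phases = rng.uniform(20, 200, 30), rng.uniform(0, 2 * np.pi, 30)
ref = np.exp(-3 * t) * sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

true_eps = 2e-3                                 # imposed relative stretch
pert = np.interp(t * (1 + true_eps), t, ref)    # perturbed coda = dilated reference

def stretch_coefficient(reference, perturbed, grid):
    """Stretching method: pick the dilation maximizing correlation with the perturbed coda."""
    cc = [np.corrcoef(np.interp(t * (1 + e), t, reference), perturbed)[0, 1]
          for e in grid]
    return grid[int(np.argmax(cc))]

eps = stretch_coefficient(ref, pert, np.linspace(-5e-3, 5e-3, 201))
print(eps)
```

In the experiment this coefficient is tracked over the 60 h immersion, after removing the temperature-induced part of the stretch.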
Procedia PDF Downloads 234
39143 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter
Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri
Abstract:
Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and to false alarms from ICU monitors. Clinical support in the ICU requires highly reliable heart rate estimation. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. SQIs of the physiological signals (EEG, EMG and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database from PhysioNet, containing bedside monitor recordings of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion
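The fusion step can be illustrated with a simple quality-weighted average. The paper weights each estimate by its Kalman filter's SQI-modified innovations; this sketch shows only the simpler underlying idea of down-weighting low-SQI channels, and all numbers are illustrative, not from the MIMIC II records:

```python
import numpy as np

# HR estimates (bpm) from independent channels with signal-quality indices in [0, 1].
# The ABP channel is deliberately "corrupted" to show it being suppressed.
hr = np.array([72.0, 110.0, 71.0, 73.0, 70.0])   # ECG, ABP (corrupted), EEG, EMG, EOG
sqi = np.array([0.95, 0.05, 0.60, 0.55, 0.50])

def fuse(estimates, quality):
    """Quality-weighted fusion: low-SQI channels contribute little to the fused HR."""
    w = quality / quality.sum()
    return float(np.dot(w, estimates))

print(fuse(hr, sqi))
```

The corrupted ABP estimate (110 bpm) barely moves the fused value, whereas a plain average would be pulled to about 79 bpm; the Kalman-filter version additionally adapts these weights over time as signal quality changes.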
Procedia PDF Downloads 696
39142 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment
Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to localize a platform and build an environment map at the same time; it has a wide range of applications in autonomous driving, virtual reality and other related fields. Current VSLAM systems can maintain high accuracy in static environments. In dynamic environments, however, moving objects in the scene reduce the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation network is used to extract the prior active motion regions, prior static regions and prior passive motion regions in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static regions. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into static and dynamic regions, so that dynamic object regions are eliminated. Finally, only the static regions are used by the tracking thread. Our work builds on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%-98.5% in highly dynamic environments.
Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM
Procedia PDF Downloads 116
39141 Mobile Platform’s Attitude Determination Based on Smoothed GPS Code Data and Carrier-Phase Measurements
Authors: Mohamed Ramdani, Hassen Abdellaoui, Abdenour Boudrassen
Abstract:
Approaches to estimating a mobile platform's attitude are mainly based on combined positioning techniques and dedicated algorithms that aim to reach a fast and accurate solution. In this work, we describe the design and implementation of an attitude determination (AD) process using only measurements from GPS sensors. The approach is based on GPS code data smoothed with a Hatch filter and raw carrier-phase measurements, integrated into an attitude algorithm based on vector measurements using the least-squares (LSQ) estimation method. A GPS dataset from a static experiment is used to investigate the effectiveness of the presented approach and, consequently, to check the accuracy of the attitude estimation algorithm. Attitude results from a GPS multi-antenna setup over short baselines are introduced and analyzed. The 3D accuracy of the attitude parameters estimated from smoothed measurements is on the order of 0.27°.
Keywords: attitude determination, GPS code data smoothing, Hatch filter, carrier-phase measurements, least-squares attitude estimation
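The code-smoothing step uses the classic Hatch filter recursion, s_k = rho_k/N + (N-1)/N * (s_{k-1} + phi_k - phi_{k-1}), which averages the noisy but unbiased pseudorange against precise carrier-phase deltas. A self-contained sketch on synthetic observations; the noise levels and window length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
true_range = 2.0e7 + 0.5 * np.arange(n)            # metres, slowly drifting geometry
code = true_range + rng.normal(0.0, 3.0, n)        # pseudorange: unbiased but noisy
carrier = true_range + rng.normal(0.0, 0.003, n) + 12.345  # precise, unknown ambiguity

def hatch_filter(code_obs, carrier_obs, window=100):
    """Classic Hatch filter: smooth noisy code with precise carrier-phase deltas."""
    smoothed = np.empty_like(code_obs)
    smoothed[0] = code_obs[0]
    for k in range(1, len(code_obs)):
        n_k = min(k + 1, window)
        # Carrier deltas propagate the previous estimate; the ambiguity cancels out.
        predicted = smoothed[k - 1] + (carrier_obs[k] - carrier_obs[k - 1])
        smoothed[k] = code_obs[k] / n_k + predicted * (n_k - 1) / n_k
    return smoothed

sm = hatch_filter(code, carrier)
print(np.std(code - true_range), np.std(sm - true_range))
```

After the window fills, the code noise is reduced roughly by sqrt(window), while the constant carrier ambiguity never enters because only phase differences are used.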
Procedia PDF Downloads 155
39140 A Study on the Establishment of a 4-Joint Based Motion Capture System and Data Acquisition
Authors: Kyeong-Ri Ko, Seong Bong Bae, Jang Sik Choi, Sung Bum Pan
Abstract:
A simple way to test postural imbalance of the human body is to check for differences in the bilateral shoulder and pelvic heights of the subject. In this paper, to check for spinal disorders, the authors studied how to establish a motion capture system that obtains and expresses the motion of four joints, and how to acquire data based on this system. Four sensors are attached to the shoulders and the pelvis. To verify the established system, the normal and abnormal postures of subjects listening to a lecture were recorded using the 4-joint based motion capture system. The results confirmed that the motions of the subjects matched the 3-dimensional simulation.
Keywords: inertial sensor, motion capture, motion data acquisition, posture imbalance
Procedia PDF Downloads 515
39139 Numerical Simulation of Two-Phase Flows Using a Pressure-Based Solver
Authors: Lei Zhang, Jean-Michel Ghidaglia, Anela Kumbaro
Abstract:
This work focuses on the numerical simulation of two-phase flows based on the bi-fluid six-equation model widely used in many industrial areas, such as nuclear power plant safety analysis. A pressure-based numerical method is adopted in our studies because two-phase flows commonly exhibit a large range of Mach numbers, owing to the mixture of liquid and gas, and density-based solvers experience stiffness problems as well as a loss of accuracy when approaching the low Mach number limit. This work extends the semi-implicit pressure solver in the nuclear component CUPID code, where the governing equations are solved on unstructured grids with co-located variables to accommodate complicated geometries. A conservative version of the solver is developed in order to capture shocks exactly in one-phase flows, and is extended to two-phase situations. An interfacial pressure term is added to the bi-fluid model to make the system hyperbolic and to establish a well-posed mathematical problem that allows convergent solutions with refined meshes. The ability of the numerical method to treat phase appearance and disappearance, as well as the behavior of the scheme at low Mach numbers, is demonstrated through several numerical results. Finally, interfacial mass and heat transfer models are included to deal with situations where mass and energy transfer between phases is important, and associated industrial numerical benchmarks with tabulated EOS (equations of state) for the fluids are performed.
Keywords: two-phase flows, numerical simulation, bi-fluid model, unstructured grids, phase appearance and disappearance
Procedia PDF Downloads 394
39138 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method
Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih
Abstract:
Diabetes mellitus (DM) is a chronic human disease that occurs when the pancreas cannot produce enough insulin or the body uses insulin ineffectively, causing an elevated level of glucose in the blood, called hyperglycemia. In Indonesia, DM is a serious health problem because it can cause blindness, kidney disease, diabetic foot (gangrene), and stroke. DM can be classified by its main causes into type 1, type 2, and gestational diabetes. Diabetes type 1, previously known as insulin-dependent diabetes, is due to a lack of insulin production. Diabetes type 2, previously known as non-insulin-dependent diabetes, is due to ineffective use of insulin, while gestational diabetes is hyperglycemia first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors in this disease are genetics (A) and lifestyle (E), and a disease with these two factors can be modeled with the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using the Bayesian method, together with a simulation of inheritance from parent to offspring. The AE model involves a response variable, predictor variables, and parameters representing the population under study. The population is measured through a random sample; the response and predictor variables are determined from the sample, while the parameters are unknown and must therefore be estimated from the sample. The estimates of the AE model parameters were obtained from a joint posterior distribution. A simulation was conducted to obtain the genetic variance and lifestyle variance, giving 0.3600 for the genetic variance and 0.0899 for the lifestyle variance. Therefore, the genetic factor contributes more variance to DM type 2 than lifestyle.
Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style
Procedia PDF Downloads 284
39137 Group Decision Making through Interval-Valued Intuitionistic Fuzzy Soft Set TOPSIS Method Using New Hybrid Score Function
Authors: Syed Talib Abbas Raza, Tahseen Ahmed Jilani, Saleem Abdullah
Abstract:
This paper presents an interval-valued intuitionistic fuzzy soft set (IVIFSS) based TOPSIS method for group decision making. An interval-valued intuitionistic fuzzy soft set is a hybrid of an interval-valued intuitionistic fuzzy set and a soft set; in group decision making problems, the IVIFSS makes the process much more algebraically elegant. We use a weighted arithmetic averaging operator to aggregate the information and define a new hybrid score function as a metric tool for comparing interval-valued intuitionistic fuzzy values. In an illustrative example, we apply the developed method to a criminological problem, building a group decision making model for integrating the imprecise and hesitant evaluations of multiple law enforcement agencies working on target killing cases in the country.
Keywords: group decision making, interval-valued intuitionistic fuzzy soft set, TOPSIS, score function, criminology
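For reference, the crisp TOPSIS skeleton underlying the method looks as follows. The paper replaces the crisp matrix entries with interval-valued intuitionistic fuzzy values compared via the hybrid score function, which this sketch omits; the decision matrix and weights below are illustrative:

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = criteria (all benefit-type here).
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
w = np.array([0.4, 0.3, 0.3])                    # criteria weights, summing to 1

def topsis_rank(X, w):
    """Crisp TOPSIS: rank by relative closeness to the ideal solution."""
    V = X / np.linalg.norm(X, axis=0) * w        # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
    d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness     # best alternative first

order, closeness = topsis_rank(X, w)
print(order, closeness)
```

In the group setting, each agency's IVIFSS evaluation is first aggregated with the weighted arithmetic averaging operator, and the score function takes over the role played here by Euclidean distance.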
Procedia PDF Downloads 604
39136 An Overview of Posterior Fossa Associated Pathologies and Segmentation
Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets
Abstract:
Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past, but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, or the Cavalieri estimator were utilized. These studies produced better data than older methods that relied on formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies), followed by manual segmentation (7 studies), automated segmentation (4 studies), and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results; however, no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method, while atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa related pathologies, and medical professionals will save time and effort when analyzing large data sets.
Keywords: Chiari, posterior fossa, segmentation, volumetric
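The Cavalieri estimator mentioned above is simple to sketch: the volume is estimated as (points counted) x (area per point) x (slice spacing). Below it is applied to a sphere, where the true volume is known; all dimensions are illustrative:

```python
import numpy as np

# Cavalieri point-counting estimate of a sphere's volume from parallel "slices".
radius, spacing = 10.0, 1.0            # mm; sphere radius and slice separation d
grid = 1.0                             # mm; point-grid spacing within each slice
z = np.arange(-radius, radius + spacing, spacing)
xs = np.arange(-radius, radius + grid, grid)
xx, yy = np.meshgrid(xs, xs)

# Count grid points falling inside the circular cross-section of each slice.
points = sum(np.count_nonzero(xx**2 + yy**2 <= radius**2 - zk**2) for zk in z)
volume_est = points * grid * grid * spacing    # V ~ (points) * a_p * d
volume_true = 4.0 / 3.0 * np.pi * radius**3
print(volume_est, volume_true)
```

On imaging data the "circle test" is replaced by an observer clicking grid points that fall on the structure in each slice, which is why the method needs no contouring software at all.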
Procedia PDF Downloads 106
39135 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct a time step control algorithm for the error correction method recently developed by one of the authors for solving stiff initial value problems. This is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the reuse of duplicated node points in generalized Chebyshev polynomials of two different degrees, adding only the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously and performs well in computational cost compared with the original method. Two stiff problems are solved numerically to assess the effectiveness of the proposed scheme.
Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
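The step-size update at the core of most error-controlled integrators has the form h_new = 0.9 h (tol/err)^(1/(p+1)). The sketch below uses this standard rule with a simple second-order Heun/Euler pair rather than the authors' generalized-Chebyshev corrector, on a mildly stiff test problem; all settings are illustrative:

```python
import math

def solve_adaptive(f, t0, y0, t_end, tol=1e-6):
    """Adaptive integration with the standard controller h_new = 0.9*h*(tol/err)**(1/(p+1));
    here the solution is Heun's method (p=2) and the error estimate compares it to Euler."""
    t, y, h = t0, y0, 0.01
    steps = 0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2.0             # Heun, order 2
        err = abs(y_high - (y + h * k1)) + 1e-15     # vs. Euler, order 1
        if err <= tol:                               # accept the step
            t, y = t + h, y_high
            steps += 1
        h *= 0.9 * (tol / err) ** 0.5                # grow or shrink either way
    return y, steps

# Mildly stiff test problem y' = -50(y - cos t), y(0) = 0; exact y(1) ~ 0.556909.
y_num, steps = solve_adaptive(lambda t, y: -50.0 * (y - math.cos(t)), 0.0, 0.0, 1.0)
print(y_num, steps)
```

The controller automatically takes tiny steps through the fast initial transient and larger ones afterwards; the paper's contribution is avoiding the re-sampling cost that such step changes normally incur in a Chebyshev-collocation setting.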
Procedia PDF Downloads 573
39134 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets
Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou
Abstract:
Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. First, time-frequency analysis of measured data indicates that the time-frequency spectrograms of three kinds of ground moving targets, i.e., a single walking person, two walking people, and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to capture these differences. Finally, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy over the three kinds of targets in the measured data is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler feature.
Keywords: micro-Doppler, time-frequency analysis, feature extraction, radar target classification
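The classification stage can be sketched with a linear SVM trained on three-dimensional features. This stand-in uses the Pegasos subgradient method with a one-vs-rest scheme for the three classes; the Gaussian feature clusters are illustrative, not real micro-Doppler measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 3-D time-frequency features for three target classes (illustrative clusters).
centers = np.array([[1.0, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 2.0]])
X = np.vstack([c + rng.normal(0, 0.2, (60, 3)) for c in centers])
y = np.repeat([0, 1, 2], 60)

def pegasos(X, y_pm, lam=0.01, epochs=30):
    """Binary linear SVM via the Pegasos subgradient method (labels in {-1, +1})."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # absorb the bias term
    w, t = np.zeros(Xb.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y_pm[i] * Xb[i].dot(w) < 1:       # hinge-loss subgradient step
                w = (1 - eta * lam) * w + eta * y_pm[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

# One-vs-rest multiclass on top of the binary SVM.
W = np.array([pegasos(X, np.where(y == k, 1.0, -1.0)) for k in range(3)])
scores = np.hstack([X, np.ones((len(X), 1))]) @ W.T
accuracy = float(np.mean(np.argmax(scores, axis=1) == y))
print(accuracy)
```

With well-separated clusters the linear one-vs-rest SVM reaches near-perfect training accuracy, mirroring the >96% reported on the measured spectrogram features.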
Procedia PDF Downloads 405
39133 Learning to Learn: A Course on Language Learning Strategies
Authors: Hélène Knoerr
Abstract:
In an increasingly global world, more and more international students attend academic courses and programs in a second or foreign language, and local students register in language learning classes in order to improve their employability. These students need to quickly become proficient in the new language. How can we, as administrators, curriculum developers and teachers, make sure that they have the tools they need in order to develop their language skills in an academic context? This paper will describe the development and implementation of a new course, Learning to learn, as part of the Major in French/English as a Second Language at the University of Ottawa. This academic program was recently completely overhauled in order to reflect the current approaches in language learning (more specifically, the action-oriented approach as embodied in the Common European Framework of Reference for Languages, and the concept of life-long autonomous learning). The course itself is based on research on language learning strategies, with a particular focus on the characteristics of the “good language learner”. We will present the methodological and pedagogical foundations, describe the course objectives and learning outcomes, the language learning strategies, and the classroom activities. The paper will conclude with students’ feedback and suggest avenues for further exploration.
Keywords: curriculum development, language learning, learning strategies, second language
Procedia PDF Downloads 411
39132 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability
Authors: Rui Calejo Rodrigues
Abstract:
Building refurbishment and maintenance is a major area of knowledge that is ultimately left to user/occupant criteria. Optimizing the service life of a building requires a specialized background, as service life is a concept that demands proficiency to implement. ISO 15686-2, Buildings and constructed assets – Service life planning – Part 2: Service life prediction procedures, specifies a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot sense when a component's life span is ending, so they simply act on deterministic periods, and the resulting costly, resource-consuming solutions do not meet global targets of planet sustainability. Given an estimated two thousand million conventional buildings in the world, a probabilistic rather than deterministic method for service life planning would yield immense resource savings. Since 1989, the research team, now the CEES – Center for Building in Service Studies, has developed a methodology based on the Monte Carlo method for a probabilistic treatment of building component life spans, costs, and service life care time spans. The research question concerns the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic life-span approach is presented, and experimental data are obtained for comparison with deterministic data. Since a building's life cycle depends largely on component replacement, this methodology allows conclusions to be drawn on the global impact of fixed replacement schedules such as those resulting from deterministic models.
Major conclusions based on the conventional-buildings estimate are presented and evaluated from a sustainability perspective.
Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation
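The probabilistic idea can be sketched in a few lines: draw component service lives from a distribution, count replacements within a planning horizon, and compare with a fixed-interval plan. This is a minimal sketch of the Monte Carlo approach, not the CEES model; the component (a 20-year mean life, 5-year standard deviation, 60-year horizon) and the normal distribution are illustrative assumptions, not figures from the paper.

```python
import random
import statistics

def simulate_replacements(mean_life, sd_life, horizon, runs=20000, seed=1):
    # Monte Carlo estimate of how many times a component is replaced within
    # the planning horizon when its service life is Normal(mean, sd) rather
    # than a fixed deterministic value. Lives are floored at one year.
    rng = random.Random(seed)
    counts = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += max(1.0, rng.gauss(mean_life, sd_life))
            if t > horizon:
                break
            n += 1
        counts.append(n)
    return statistics.mean(counts)

# Deterministic plan: replacements at fixed 20-year intervals strictly
# inside the 60-year horizon (years 20 and 40).
deterministic = 60 // 20 - 1
probabilistic = simulate_replacements(20.0, 5.0, 60.0)
```

The expected replacement count under uncertainty differs from the fixed-interval count, which is exactly the gap between deterministic and probabilistic planning that the abstract argues matters at the scale of the world building stock.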
Procedia PDF Downloads 205
39131 Internet of Things-Based Electric Vehicle Charging Notification
Authors: Nagarjuna Pitty
Abstract:
Electric vehicles (EVs) are quickly becoming the heralds of vehicle innovation, and the energy consumption of internal combustion engines is higher than that of electric motors. This study, related to the invention "Advanced Method and Process Quick Electric Vehicle Charging", endeavors to address how communication during the module charging process is performed between the EV and the Electric Vehicle Supply Equipment (EVSE). In this research paper, readings on electric vehicle charging approaches are reviewed, and the module charging phases are described comprehensively.
Keywords: electric, vehicle, charging, notification, IoT, supply, equipment
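As a loose illustration of the notification idea in the title, the sketch below turns a stream of state-of-charge readings into charging events. The event schema, the threshold, and the polling model are all hypothetical assumptions, not part of the invention described; a real IoT deployment would publish such payloads to a broker (e.g. over MQTT) rather than return them from a function.

```python
def charge_notifications(readings, threshold=80):
    # Hypothetical notifier: given successive state-of-charge readings (%),
    # emit one event when the battery is observed to be charging and one
    # when it crosses the "charged" threshold.
    events, notified, prev = [], False, None
    for soc in readings:
        if prev is not None and soc > prev and not events:
            events.append({"event": "charging_started", "soc": soc})
        if soc >= threshold and not notified:
            notified = True
            events.append({"event": "charge_complete", "soc": soc})
        prev = soc
    return events

# A short polled session: the vehicle plugs in at 20% and charges past 80%.
events = charge_notifications([20, 25, 40, 81, 85])
```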
Procedia PDF Downloads 71
39130 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power combining with beam scanning without a phase shifter. Modeling an integrated active antenna means coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present study focuses on modeling the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques for solving electromagnetic problems, and it is the dominant technique for solving the Maxwell and transport integral equations of an active integrated antenna. In this setting, the equivalent circuit is introduced to develop an integral-method formulation by transposing the field problem into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method was suggested in order to represent the integral equations as circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution.
In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeled structure, which defines an equivalent EM scheme translating the electromagnetic equations considered. Secondly, projecting these equations onto common test functions leads to a linear matrix equation whose unknowns are the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution, and the electric field distribution. From the electromagnetic calculations, we present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper is a pilot study mapping out the variation of the current evaluated by MOM-GEC. The essential improvement of our method is reduced computing time and memory requirements, while still providing a sufficiently global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
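The computational core described above — projection onto test functions yields a dense linear matrix equation for the current amplitudes, from which the input impedance follows — can be sketched generically. The 3×3 impedance matrix and excitation vector below are toy illustrative values, not a real MESFET/waveguide model, and the solver is plain Gaussian elimination rather than the MOM-GEC formulation itself.

```python
def solve_linear(Z, V):
    # Gaussian elimination with partial pivoting for a small complex system
    # Z @ I = V, the dense matrix equation that MoM testing produces.
    n = len(V)
    A = [row[:] + [v] for row, v in zip(Z, V)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    I = [0j] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * I[c] for c in range(r + 1, n))
        I[r] = (A[r][n] - s) / A[r][r]
    return I

# Toy 3-basis-function impedance matrix (illustrative values only) with a
# 1 V excitation on the middle (feed) basis function.
Z = [[2+1j, 0.3j, 0.1j],
     [0.3j, 2+1j, 0.3j],
     [0.1j, 0.3j, 2+1j]]
V = [0j, 1+0j, 0j]
I = solve_linear(Z, V)
Z_in = V[1] / I[1]  # input impedance seen at the feed
```

Convergence studies of the kind the abstract mentions repeat this solve for growing numbers of test functions and guide modes and watch Z_in stabilize.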
Procedia PDF Downloads 198
39129 Students’ Perceptions of Well-Being and School-Based Well-Being Programs and Interventions
Authors: Amanda Madden
Abstract:
The purpose of this research was to identify students' understanding of well-being and their perceptions of the effective components of school-based well-being programs they have participated in during their time in secondary school. With one in four adolescents suffering from some form of mental health disorder, which has the potential to directly impact their academic ability, schools have moved towards a more holistic approach to education, resulting in the growth of school-based well-being programs. There is limited research on the effectiveness of school-based well-being programs, and fewer studies still examine students' perspectives on their well-being. A mixed-method design was utilized, framed by a social constructivist methodology. Quantitative data were collected through a researcher-developed self-report survey, and qualitative data were collected through one-on-one interviews and a semi-structured focus group with Year 12 students from three independent co-educational schools in Western Australia. Preliminary findings indicate that participants have experienced minimal impact, positive or negative, on their well-being from school-based well-being programs. The data detailed that adolescents consider happiness, positive attitude, good physical health, balance, emotional fulfillment, and confidence to be components of well-being. The findings also highlighted that sports, positive family relationships, positive friendships, and pets enhanced well-being. This research suggests that researchers and educational leaders should consider students' understanding of well-being when developing school-based well-being assessments and interventions. Students are the recipients of school-based well-being programs and are best placed to inform what they will and will not respond to when appropriate well-being content is determined.
Keywords: wellbeing, school based wellbeing, adolescents, wellbeing interventions
Procedia PDF Downloads 72
39128 Risk-Based Regulation as a Model of Control in the South African Meat Industry
Authors: R. Govender, T. C. Katsande, E. Madoroba, N. M. Thiebaut, D. Naidoo
Abstract:
South African control over meat safety is managed by the Department of Agriculture, Forestry and Fisheries (DAFF). The veterinary services department in each of the country's nine provinces is tasked with overseeing the farm and abattoir segments of the meat supply chain. Abattoirs are privately owned, and their number has increased over the years. This increase has placed constraints on the government resources required to monitor these abattoirs. This paper presents empirical research results on the hygienic processing of meat in high and low throughput abattoirs. It makes a case for the adoption of risk-based regulation as a method of government control over hygienic and safe meat processing at abattoirs in South Africa. Recommendations are made to the DAFF regarding policy considerations on risk-based regulation as a model of control in South Africa.
Keywords: risk-based regulation, abattoir, food control, meat safety
Procedia PDF Downloads 315
39127 Backstepping Design and Fractional Differential Equation of Chaotic System
Authors: Ayub Khan, Net Ram Garg, Geeta Jain
Abstract:
In this paper, a backstepping method is proposed to synchronize two fractional-order chaotic systems. The simulation results show that this method can effectively synchronize the two chaotic systems.
Keywords: backstepping method, fractional order, synchronization, chaotic system
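As a much-simplified illustration of master-slave chaos synchronization, the sketch below couples two integer-order Lorenz systems with a proportional error-feedback controller. This is an active-control stand-in, not the fractional-order backstepping design of the paper (simulating fractional derivatives requires memory-kernel methods beyond this sketch); the gain, step size, and initial states are arbitrary illustrative choices.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # Right-hand side of the classical Lorenz system.
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def synchronize(steps=20000, dt=0.0005, k=100.0):
    # Master-slave synchronization: the slave receives the feedback
    # u = -k * (slave - master) on every state, and both systems are
    # advanced with explicit Euler. Returns the final max state error.
    m = (1.0, 1.0, 1.0)    # master initial state
    s = (5.0, -3.0, 10.0)  # slave starts far from the master
    for _ in range(steps):
        fm, fs = lorenz(m), lorenz(s)
        u = tuple(-k * (si - mi) for si, mi in zip(s, m))
        m = tuple(mi + dt * fmi for mi, fmi in zip(m, fm))
        s = tuple(si + dt * (fsi + ui) for si, fsi, ui in zip(s, fs, u))
    return max(abs(si - mi) for si, mi in zip(s, m))

final_err = synchronize()
```

With the gain dominating the local Lorenz dynamics, the error between the two trajectories decays to numerical noise, which is the behavior the simulations in the paper demonstrate for the fractional backstepping controller.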
Procedia PDF Downloads 458
39126 Software Component Identification from Its Object-Oriented Code: Graph Metrics Based Approach
Authors: Manel Brichni, Abdelhak-Djamel Seriai
Abstract:
Systems are increasingly complex. To reduce this complexity, an abstract view of the system can simplify its development. To this end, we propose a method to decompose systems into subsystems while reducing their coupling; these subsystems represent components. Starting from an existing object-oriented system, the main idea of our approach is to model all entities of the object-oriented source code as graphs. Such a model is easy to handle, so restructuring algorithms based on graph metrics can be applied. The particularity of our approach is that, in addition to standard metrics such as coupling and cohesion, it integrates graph metrics that give more precision during component identification. To treat this problem, we relied on the ROMANTIC approach, which proposed component-based software architecture recovery from an object-oriented system.
Keywords: software reengineering, software component and interfaces, metrics, graphs
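The coupling/cohesion trade-off that drives component identification can be illustrated on a toy dependency graph. The class names and the two candidate partitions below are hypothetical; a real identification step would search over partitions using measures like these (plus the additional graph metrics the paper integrates) as its objective.

```python
def partition_quality(edges, partition):
    # Cohesion here counts dependencies kept inside a component; coupling
    # counts dependencies crossing component boundaries. Component
    # identification seeks high cohesion and low coupling.
    comp = {cls: i for i, group in enumerate(partition) for cls in group}
    internal = sum(1 for a, b in edges if comp[a] == comp[b])
    external = len(edges) - internal
    return internal, external

# Toy class-dependency graph of an object-oriented system (hypothetical
# class names): a billing cluster, a parsing cluster, and one bridge edge.
edges = [("Order", "Invoice"), ("Order", "Customer"), ("Invoice", "Customer"),
         ("Parser", "Lexer"), ("Parser", "Ast"), ("Lexer", "Ast"),
         ("Order", "Parser")]
good = [{"Order", "Invoice", "Customer"}, {"Parser", "Lexer", "Ast"}]
bad = [{"Order", "Lexer", "Ast"}, {"Parser", "Invoice", "Customer"}]
```

The partition that respects the two natural clusters keeps six of the seven dependencies internal, while the scrambled one keeps only two; a restructuring algorithm would prefer the former.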
Procedia PDF Downloads 501
39125 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding suitable regions in medical images such as MRI, PET, and CT. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB™ for 2D MRI images. The program has two interfaces, one for data pre-processing and one for image clustering or segmentation. The data pre-processing section offers a median filter, average filter, unsharp mask filter, Wiener filter, and a custom filter (a filter designed by the user in MATLAB). For image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and BBO (biogeography-based optimization). To find a suitable cluster number for a 2D MRI, we have designed a histogram-based cluster estimation method and then supplied the estimated numbers to the segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image using this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
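The histogram-based cluster-number estimation followed by clustering can be sketched outside MATLAB. The sketch below, in Python, counts significant intensity-histogram peaks (a stand-in for the authors' unspecified estimation rule, with illustrative bin-count and peak-threshold settings) and feeds the count into a plain 1-D k-means; the synthetic two-population "image" is likewise only an illustration.

```python
import random

def estimate_clusters(pixels, bins=16, frac=0.1):
    # Histogram-based cluster-count estimate: build an intensity histogram
    # and count its significant local peaks.
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / bins or 1
    hist = [0] * bins
    for p in pixels:
        hist[min(int((p - lo) / width), bins - 1)] += 1
    thresh = frac * max(hist)
    peaks = sum(1 for i in range(1, bins - 1)
                if hist[i] >= hist[i-1] and hist[i] > hist[i+1]
                and hist[i] > thresh)
    return max(peaks, 1)

def kmeans_1d(pixels, k, iters=20):
    # Plain 1-D k-means on intensities, seeded with evenly spaced centroids.
    lo, hi = min(pixels), max(pixels)
    cents = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            groups[min(range(k), key=lambda i: abs(p - cents[i]))].append(p)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return sorted(cents)

# Synthetic "image": two intensity populations (dark vs. bright tissue).
rng = random.Random(0)
pixels = ([rng.gauss(50, 5) for _ in range(500)]
          + [rng.gauss(180, 8) for _ in range(500)])
k = estimate_clusters(pixels)
cents = kmeans_1d(pixels, k)
```

The estimated cluster count matches the two intensity populations, and the k-means centroids land near their means, which is the automatic-clustering behavior the GUI's estimation stage provides.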
Procedia PDF Downloads 377