Search results for: action based method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 41054

37334 Optimizing Operation of Photovoltaic System Using Neural Network and Fuzzy Logic

Authors: N. Drir, L. Barazane, M. Loudini

Abstract:

It is well known that photovoltaic (PV) cells are an attractive source of energy. Abundant and ubiquitous, this source is one of the important renewable energy sources whose use has been increasing worldwide year by year. However, the V-P characteristic curve of a PV generator has a maximum point, called the maximum power point (MPP), which depends closely on atmospheric conditions and the rotation of the earth. Because the output characteristics are nonlinear and change with temperature and irradiation, a controller called a maximum power point tracker (MPPT) is needed to extract the maximum power at the terminals of the photovoltaic generator. In this context, the authors study the modeling of a photovoltaic system and seek an appropriate method for optimizing the operation of the PV generator, using two intelligent controllers to track this point: the first based on artificial neural networks and the second on fuzzy logic. After the design and integration of each controller into the global process, their performances are examined and compared through a series of simulations. The results show that both controllers track the MPP well compared with the other methods proposed to date.
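
As a point of reference for the comparison mentioned above, a minimal perturb-and-observe (P&O) tracker is sketched below; it is not the authors' neural-network or fuzzy controller, and the PV curve used here is a hypothetical stand-in for a real generator model.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    # Hypothetical PV curve: current falls off sharply near the open-circuit voltage.
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, step=0.5, iterations=100):
    """Classical P&O MPPT: keep perturbing the operating voltage in the direction that increases power."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped -> reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"estimated MPP: V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```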

Keywords: maximum power point tracking, neural networks, photovoltaic, P&O

Procedia PDF Downloads 339
37333 Permissible Horizontal Displacements during the Construction of Vertical Shafts in Soft Soils at the Valley of Mexico: Case History

Authors: Joel M. De La Rosa R.

Abstract:

This paper details the results obtained from monitoring the horizontal deformations of the soil mass during each construction stage of several vertical shafts built by the flotation method in the soft soils of the Valley of Mexico. From the analysis of these results, the magnitude of the horizontal deformations that occurred during the monitoring period is established, together with their percentage relationship to the diameter and depth of the excavation. Based on the horizontal deformation monitoring system and the information provided by the supervisor's site log, the construction stages that have the greatest impact on deformations are identified. Additionally, an analysis of the deformations is carried out that takes into account the strength and deformability characteristics of the excavated soils, as well as the prevailing hydraulic conditions. This work will allow construction engineers and institutions in charge of infrastructure works in the Valley of Mexico to establish permissible ranges for the horizontal deformations that can occur in very soft, saturated soils during the different construction stages, improving response protocols to potentially dangerous behavior.

Keywords: vertical shaft, flotation method, very soft clays, construction supervision

Procedia PDF Downloads 189
37332 A Dose Distribution Approach Using Monte Carlo Simulation in Dosimetric Accuracy Calculation for Treating the Lung Tumor

Authors: Md Abdullah Al Mashud, M. Tariquzzaman, M. Jahangir Alam, Tapan Kumar Godder, M. Mahbubur Rahman

Abstract:

This paper presents Monte Carlo (MC) method-based dose distributions in a lung tumor for a 6 MV photon beam, aimed at improving dosimetric accuracy in cancer treatment. Polystyrene, a tissue-equivalent material close to lung tumor density, is used in this research. In the empirical calculations, the TRS-398 formalism of the IAEA was used, and the setup followed the ICRU recommendations. The research outcomes were compared with state-of-the-art experimental results. From the experiments, it is observed that the proposed approach provides more accurate results than the existing approaches. The average % variation between measured and TPS-simulated values was 1.337±0.531, which is a substantial improvement over the state of the art.
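
The comparison metric quoted above (an average % variation of 1.337±0.531 between measured and TPS values) reduces to a simple calculation, sketched below; the dose arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical measured and treatment-planning-system (TPS) doses at matched points (Gy).
measured = np.array([2.01, 1.98, 2.05, 1.96, 2.03])
tps      = np.array([1.99, 2.01, 2.02, 1.99, 2.00])

pct_variation = np.abs(measured - tps) / tps * 100.0
print(f"average % variation: {pct_variation.mean():.3f} +/- {pct_variation.std(ddof=1):.3f}")
```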

Keywords: lung tumour, Monte Carlo, polystyrene, Elekta synergy, Monaco planning system

Procedia PDF Downloads 445
37331 Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object

Authors: Yadpiroon Onmek, Jean Triboulet, Sebastien Druon, Bruno Jouvencel

Abstract:

The objective of this paper is to develop 3D underwater reconstruction of archaeological objects based on the fusion of a sonar system and a stereo camera system. The underwater images are obtained from a calibrated camera system. Multiple image pairs are input, and well-known filters are first applied to improve the quality of the underwater images. Features of interest between image pairs are extracted and matched using well-known methods: a FAST detector and FLANN-based matching. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are triangulated to produce local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SFM technique is used to build the global sparse point clouds. Finally, the ICP method is used to fuse the sonar information with the stereo model. The accuracy of the final 3D models is assessed by comparing measurements with the real object.
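
A minimal sketch of the matching and outlier-rejection stage is given below using OpenCV; ORB (which builds on a FAST detector) and brute-force Hamming matching are used here as stand-ins for the exact detector and matcher configuration of the paper, and the image filenames are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical left/right frames of a calibrated underwater stereo pair.
img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # FAST-based detector plus binary descriptors
kp_l, des_l = orb.detectAndCompute(img_l, None)
kp_r, des_r = orb.detectAndCompute(img_r, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

# RANSAC on the fundamental matrix rejects outlier correspondences before triangulation.
F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 3.0, 0.99)
inliers_l = pts_l[inlier_mask.ravel() == 1]
inliers_r = pts_r[inlier_mask.ravel() == 1]
print(f"{len(inliers_l)} inlier matches out of {len(matches)}")
```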

Keywords: 3D reconstruction, archaeology, fusion, stereo system, sonar system, underwater

Procedia PDF Downloads 299
37330 Gaussian Mixture Model Based Identification of Arterial Wall Movement for Computation of Distension Waveform

Authors: Ravindra B. Patil, P. Krishnamoorthy, Shriram Sethuraman

Abstract:

This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to the existing approaches in tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.

Keywords: distension waveform, Gaussian Mixture Model, RF ultrasound, arterial wall movement

Procedia PDF Downloads 506
37329 Sterilization of Potato Explants for in vitro Propagation

Authors: D. R. Masvodza, G. Coetzer, E. van der Watt

Abstract:

Microorganisms usually grow prolifically and may cause major problems in in vitro cultures. For in vitro propagation to be successful, explants need to be sterile. In order to determine the best sterilization method for potato explants cv. Amerthyst, five sterilization methods were applied separately to 24 shoots. The first sterilization method was the use of 20% sodium hypochlorite with 1 ml Tween 20 for 15 minutes. The second, third and fourth sterilization methods were the immersion of explants in 70% ethanol in a beaker for either 30 seconds, 1 minute or 2 minutes, followed by 1% sodium hypochlorite with 1 ml Tween 20 for 5 minutes. For the control treatment, no chemicals were used. Finally, all the explants were rinsed three times with autoclaved distilled water and trimmed to 1-2 cm. Explants were then cultured on MS medium with 0.01 mg L-1 NAA and 0.1 mg L-1 GA3, supplemented with 2 mg L-1 D-calcium pantothenate. The trial was laid out as a completely randomized design, and each treatment combination was replicated 24 times. At 7, 14 and 21 days after culture, data on explant color, survival, and presence or absence of contamination were recorded. The best results were obtained when 20% sodium hypochlorite was used with 1 ml Tween 20 for 15 minutes, i.e., sterilization method 1. Method 2 was comparable to method 1 when explants were cultured in glass vessels. Explants in glass vessels were significantly less contaminated than explants in polypropylene vessels. Therefore, at times, ideal methods for sterilization should be coupled with ideal culture conditions, such as good-quality culture vessels, rather than the addition of more stringent sterilants.

Keywords: culture containers, explants, sodium hypochlorite, sterilization

Procedia PDF Downloads 332
37328 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations, obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
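
The robustness argument can be illustrated with a small simulation; the Theil-Sen estimator below is used only as a readily available robust stand-in for the modified maximum likelihood estimator discussed in the abstract, and the error distribution is a hypothetical skewed, fat-tailed choice.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
true_slope, n, reps = 2.0, 60, 500
ols_est, robust_est = [], []

for _ in range(reps):
    x = rng.uniform(0, 10, n)
    # Skewed, fat-tailed errors: Student-t(3) plus a centred exponential component.
    e = rng.standard_t(3, n) + (rng.exponential(1.0, n) - 1.0)
    y = true_slope * x + e
    ols_est.append(np.polyfit(x, y, 1)[0])          # least squares slope
    robust_est.append(theilslopes(y, x)[0])         # robust slope estimate

for name, est in [("least squares", ols_est), ("robust (Theil-Sen)", robust_est)]:
    est = np.array(est)
    print(f"{name:>20}: mean {est.mean():.3f}, std {est.std(ddof=1):.3f}")
```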

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
37327 Effect of Slope Angle on Gougerd Landslide Stability in Northwest of Iran

Authors: Akbar Khodavirdizadeh

Abstract:

The Gougerd village landslide, with an area of about 150 hectares, is located southwest of Khoy city in northwest Iran. The landslide began more than 21 years ago and has caused damage to houses, such as fissures in walls and cracks in the ground and in foundations. The main mechanism of the landslide is rotational, with an elevation difference between the crown and the toe of about 230 m. The thickness of the slide mass, based on a geoelectrical investigation, is about 16 m. The upper layer of the slope is silty sand and the lower layer is clayey gravel. In this paper, the stability of the landslide is analyzed under static conditions for different groundwater surface depths and slope angles, using the limit equilibrium approach and the simplified Bishop method. The results of the 72 stability analyses showed that the stability of the Gougerd landslide increases with increasing depth of the groundwater surface below the slope crown, and especially when the slope angle is decreased. The safety factor required for stability was obtained at a groundwater surface depth of 14 m below the slope crown, or at a depth of 6.5 m when the slope angle was decreased by 3 degrees. In the critical condition, the corresponding groundwater surface depths were 3.5 m and, with the slope angle decreased by 3 degrees, 0.5 m. At groundwater surface depths of 3 m, 7 m and 10 m below the crown, the safety factors were 0.97, 1.19 and 1.33, respectively; with the slope angle decreased by 3 degrees, they were 1.27, 1.54 and 1.72, respectively. According to the results of this study, each 1 m decrease in the groundwater level increased the safety factor by about 5%, and each 1 degree reduction in the slope angle increased it by about 15%. The effect of the slope angle on the stability of the Gougerd landslide was therefore found to be greater than the effect of the groundwater.
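
A minimal sketch of the simplified Bishop limit-equilibrium calculation used in the analyses is given below; the slice geometry, strength parameters and pore pressures are hypothetical placeholders, not the Gougerd data.

```python
import numpy as np

def bishop_fs(W, alpha, b, u, c, phi_deg, tol=1e-4, max_iter=100):
    """Simplified Bishop factor of safety for a circular slip surface divided into slices.
    W: slice weights, alpha: base inclinations (rad), b: slice widths,
    u: pore pressures at slice bases, c: cohesion, phi_deg: friction angle."""
    tan_phi = np.tan(np.radians(phi_deg))
    fs = 1.0                                          # initial guess, refined iteratively
    driving = np.sum(W * np.sin(alpha))
    for _ in range(max_iter):
        m_alpha = np.cos(alpha) * (1.0 + np.tan(alpha) * tan_phi / fs)
        resisting = np.sum((c * b + (W - u * b) * tan_phi) / m_alpha)
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Hypothetical five-slice example (units: kN, m, kPa, degrees).
W     = np.array([300.0, 550.0, 700.0, 600.0, 350.0])
alpha = np.radians([35.0, 25.0, 15.0, 5.0, -5.0])
b     = np.full(5, 4.0)
u     = np.array([20.0, 40.0, 50.0, 40.0, 20.0])
print(f"FS = {bishop_fs(W, alpha, b, u, c=15.0, phi_deg=22.0):.2f}")
```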

Keywords: Gougerd landslide, stability analysis, slope angle, groundwater, Khoy

Procedia PDF Downloads 169
37326 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and can handle greater variability. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, such as row gaps, which would otherwise be falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing the field coverage and number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% and a rate of 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
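
A minimal sketch of the row-detection idea, fitting a dominant row line with RANSAC to a bird's-eye projection of the LiDAR points, is shown below; the point cloud is synthetic and the thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bird's-eye-view points: one plant row along y = 0.5*x + 1 plus clutter.
x_row = rng.uniform(0, 10, 80)
row_pts = np.column_stack([x_row, 0.5 * x_row + 1.0 + rng.normal(0, 0.05, 80)])
clutter = rng.uniform([0, -2], [10, 8], size=(60, 2))
points = np.vstack([row_pts, clutter])

def ransac_line(pts, iters=500, inlier_dist=0.15):
    """Return (slope, intercept) of the candidate line supported by the most inliers."""
    best_model, best_inliers = None, 0
    for _ in range(iters):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        if abs(p2[0] - p1[0]) < 1e-6:
            continue
        m = (p2[1] - p1[1]) / (p2[0] - p1[0])
        c = p1[1] - m * p1[0]
        # Perpendicular distance of every point to the candidate line y = m*x + c.
        dist = np.abs(m * pts[:, 0] - pts[:, 1] + c) / np.sqrt(m * m + 1.0)
        n_in = np.sum(dist < inlier_dist)
        if n_in > best_inliers:
            best_model, best_inliers = (m, c), n_in
    return best_model, best_inliers

model, n_inliers = ransac_line(points)
print(f"detected row: y = {model[0]:.2f}x + {model[1]:.2f} ({n_inliers} supporting points)")
```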

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
37325 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use patterns results in increasing peak discharge and shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of storm water networks is a costly exercise. Least-cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, pumps, etc. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA’s storm water management model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of the storm water network while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
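
A minimal sketch of the simulation-optimization coupling is given below; a placeholder capacity check stands in for the SWMM hydraulic simulation, and all costs, design flows and GA settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_conduits = 6
required_flow = np.array([0.8, 1.2, 1.5, 0.9, 1.1, 2.0])   # m^3/s, hypothetical design flows

def mock_simulate_capacity(areas):
    # Placeholder for an SWMM run: capacity assumed to grow with sectional area (Manning-like).
    return 1.6 * areas ** (4.0 / 3.0)

def fitness(areas):
    cost = np.sum(120.0 * areas)                            # cost grows with sectional area
    shortfall = np.maximum(required_flow - mock_simulate_capacity(areas), 0.0)
    return cost + 1e4 * np.sum(shortfall)                   # heavy penalty for under-capacity

pop = rng.uniform(0.2, 3.0, size=(40, n_conduits))          # initial population of designs
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                  # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.choice(20, 2, replace=False)]
        child = np.where(rng.random(n_conduits) < 0.5, a, b)                    # uniform crossover
        child += rng.normal(0, 0.05, n_conduits) * (rng.random(n_conduits) < 0.2)  # sparse mutation
        children.append(np.clip(child, 0.05, 5.0))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("least-cost sectional areas (m^2):", np.round(best, 3))
```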

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 248
37324 Protein and Lipid Extraction from Microalgae with Ultrasound Assisted Osmotic Shock Method

Authors: Nais Pinta Adetya, H. Hadiyanto

Abstract:

Microalgae have potential to be utilized as food and as a natural colorant. Microalgal biomass consists of three main components: lipids, proteins, and carbohydrates. A crucial step in producing lipids and proteins from microalgae is extraction. Microalgae have a high water content (70-90%), so drying the biomass requires considerable energy and can also degrade the lipids and proteins. Extraction of lipids from wet biomass can take place efficiently when the microalgal cells are disrupted by the osmotic shock method. In this study, the osmotic shock method was integrated with ultrasound to maximize the extraction yield of lipids and proteins from wet Spirulina sp. biomass. The study consisted of two steps: an osmotic shock treatment of the wet biomass, followed by ultrasound-assisted extraction. NaCl solution was used as the osmotic agent, at concentrations of 10%, 20%, and 30%. Extraction was conducted at 40°C for 20 minutes with an ultrasound frequency of 40 kHz. The optimal yields of protein (2.7%) and lipid (38%) were achieved at an osmotic agent concentration of 20%.

Keywords: extraction, lipid, osmotic shock, protein, ultrasound

Procedia PDF Downloads 359
37323 Investigation a New Approach "AGM" to Solve of Complicate Nonlinear Partial Differential Equations at All Engineering Field and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

In this work, our aims are accuracy, capability, and power in solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve such nonlinear differential equations in basic science and engineering, and similar problems, with a simple and innovative approach. Most engineering systems behave nonlinearly in practice, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; for example, some fluid and gas wave problems cannot be solved numerically because no boundary conditions are available. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and simple solutions; the achieved solutions are compared with those of a numerical method (fourth-order Runge-Kutta). We believe AGM can be of great benefit to researchers, professors, and students worldwide because, with the AGM coding system, complicated linear and nonlinear partial differential equations can be solved analytically, so there is no particular difficulty in solving nonlinear differential equations. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) With AGM, equations can most of the time be solved for any number of boundary or initial conditions, without any non-dimensionalization procedure. (c) The AGM method always converges for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms appearing in the nonlinear differential equation do not need a Taylor expansion with AGM, which yields high solution precision. (e) The AGM method is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with high, acceptable accuracy. (f) One important advantage of this method is the analytical solution, with high accuracy, of problems such as partial differential equations for vibrations in solids and waves in water and gas, requiring only minimal initial and boundary conditions. (g) It is very important to present a general and simple approach for solving most highly nonlinear differential equation problems in the engineering sciences, especially in civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and exact solutions.
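
Since the abstract benchmarks AGM solutions against a fourth-order Runge-Kutta integration, a minimal RK4 sketch is included below for reference; the test equation is a hypothetical example, not one of the paper's problems.

```python
import numpy as np

def rk4(f, y0, t0, t1, steps):
    """Classical fourth-order Runge-Kutta for dy/dt = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, float(y0)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Nonlinear test ODE y' = -y^2, y(0) = 1, with exact solution y(t) = 1/(1 + t).
approx = rk4(lambda t, y: -y * y, 1.0, 0.0, 2.0, 100)
print(f"RK4: {approx:.6f}, exact: {1.0 / 3.0:.6f}")
```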

Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical

Procedia PDF Downloads 463
37322 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which uses the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the interview (semi-structured) and open-ended questions for FFM-based personality assessments, designed with experts in the fields of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the measured FFM personality constructs, and psychological distress, in order to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and level of psychological distress (phase 2).
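
A minimal sketch of the natural-language-based prediction step is given below, using TF-IDF features and ridge regression as generic stand-ins for the study's actual NLP models; the interview answers and trait scores are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical interview answers and self-reported extraversion scores (one FFM trait).
answers = [
    "I love meeting new people and organizing social events.",
    "I prefer quiet evenings at home with a book.",
    "Parties energize me, and I talk to everyone in the room.",
    "Crowds exhaust me; I recharge by being alone.",
]
extraversion = [4.5, 2.0, 4.8, 1.7]

# Text features -> regression on the trait score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(answers, extraversion)

print(model.predict(["I enjoy hosting dinners and chatting with strangers."]))
```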

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 79
37321 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
37320 A Historical Analysis of The Concept of Equivalence from Different Theoretical Perspectives in Translation Studies

Authors: Amenador Kate Benedicta, Wang Zhiwei

Abstract:

Since the later part of the 20th century, the notion of equivalence has continued to be a central and critical concept in the development of translation theory. After decades of arguments over word-for-word and free translation methods, scholars attempting to develop more systematic and efficient translation theories began to focus on fundamental translation concepts such as equivalence. Although the concept of equivalence has piqued the interest of many scholars, its definition, scope, and applicability have sparked contentious arguments within the discipline. As a result, several distinct theories and explanations of the concept of equivalence have been put forward over the last half-century. Thus, this study explores and discusses the evolution of the critical concept of equivalence in translation studies through a bibliometric investigation of print and digital books and articles, analyzing different scholars' key contributions to, and limitations of, equivalence from various theoretical perspectives. In the analysis, emphasis is placed on the innovations that each theory has brought to the comprehension of equivalence. In order to achieve the aim of the study, the article begins by discussing the contributions of linguistically motivated theories to the notion of equivalence in translation, followed by functionalist-oriented contributions, before moving on to more recent advances in translation studies on the concept. Because equivalence is such a broad notion, it is impossible to discuss each researcher in depth. As a result, the most well-known names and their theories of equivalence are compared and contrasted in this research. The study emphasizes the developmental progression in our comprehension of the equivalence concept and equivalent effect. It concludes that the contributions of the various theoretical perspectives to the notion of equivalence complement one another and make up for each other's limitations. The study also highlights how troublesome the equivalence concept can become in identifying the nature of translation, and how central and unavoidable the concept is in every translation action, despite its limitations. The significance of the study lies in its synthesis of the different contributions and limitations of the various theories offered by scholars on the notion of equivalence, lending literature to both students and scholars in the field, and providing insight into future theoretical development.

Keywords: equivalence, functionalist translation theories, linguistic translation approaches, translation theories, Skopos

Procedia PDF Downloads 113
37319 Parameterized Lyapunov Function Based Robust Diagonal Dominance Pre-Compensator Design for Linear Parameter Varying Model

Authors: Xiaobao Han, Huacong Li, Jia Li

Abstract:

For dynamic decoupling of linear parameter varying (LPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI). To solve this problem, the optimization problem is first equivalently transformed into a new form that eliminates the coupling between the parameterized Lyapunov function (PLF) and the pre-compensator. The problem is then reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. In this way, a parameter-scheduled pre-compensator is obtained that satisfies both the robust performance and the decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation of a turbofan engine PLPV model.
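
The LMI machinery the method relies on can be illustrated with a minimal feasibility problem; the sketch below checks a standard Lyapunov inequality for a fixed stable matrix with CVXPY, which is far simpler than the parameterized LMIs of the paper, and the system matrix is a hypothetical example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system matrix.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])

# Find a symmetric P > 0 with A^T P + P A < 0 (standard Lyapunov LMI).
P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),
               A.T @ P + P @ A << -np.eye(2)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print("status:", problem.status)
print("P =\n", np.round(P.value, 3))
```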

Keywords: linear parameter varying (LPV), parameterized Lyapunov function (PLF), linear matrix inequalities (LMI), diagonal dominance pre-compensator

Procedia PDF Downloads 399
37318 Incorporating Multiple Supervised Learning Algorithms for Effective Intrusion Detection

Authors: Umar Albalawi, Sang C. Suh, Jinoh Kim

Abstract:

As the Internet continues to expand its usage with an enormous number of applications, cyber-threats have increased significantly. Thus, accurate detection of malicious traffic in a timely manner is a critical security concern on today’s Internet. One approach to intrusion detection is to use Machine Learning (ML) techniques. Several methods based on ML algorithms have been introduced over the past years, but they are largely limited in terms of detection accuracy and/or the time and space complexity required to run. In this work, we present a novel method for intrusion detection that incorporates a set of supervised learning algorithms. The proposed technique provides high accuracy and outperforms existing techniques that simply utilize a single learning method. In addition, our technique relies on partial flow information (rather than full information) for detection, and thus it is lightweight and desirable for online operation, with the property of early identification. With the publicly available mid-Atlantic CCDC intrusion dataset, we show that our proposed technique yields a detection rate of over 99% with a very low false alarm rate (0.4%).
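
A minimal sketch of combining several supervised learners into one detector is shown below using scikit-learn; the synthetic flow features stand in for the partial-flow features and the CCDC dataset used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for partial-flow features labelled benign (0) / malicious (1).
X, y = make_classification(n_samples=4000, n_features=12, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(max_depth=8)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft",                       # average class probabilities across learners
)
ensemble.fit(X_tr, y_tr)
print(f"detection accuracy: {ensemble.score(X_te, y_te):.3f}")
```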

Keywords: intrusion detection, supervised learning, traffic classification, computer networks

Procedia PDF Downloads 350
37317 A High-Resolution Refractive Index Sensor Based on a Magnetic Photonic Crystal

Authors: Ti-An Tsai, Chun-Chih Wang, Hung-Wen Wang, I-Ling Chang, Lien-Wen Chen

Abstract:

In this study, we demonstrate a high-resolution refractive index sensor based on a magnetic photonic crystal (MPC) composed of a triangular lattice array of air holes embedded in a Si matrix. A microcavity is created by changing the radius of an air hole in the middle of the photonic crystal. The cavity, filled with gyrotropic materials, can serve as a refractive index sensor. The shift of the resonant frequency of the sensor is obtained numerically using the finite difference time domain method under different ambient conditions with refractive indices from n = 1.0 to n = 1.1. The numerical results show that a change in refractive index as small as Δn = 0.0001 is distinguishable. In addition, the spectral response of the MPC sensor is studied in the presence of an external magnetic field. The results show that the MPC sensor exhibits a dramatic improvement in resolution.

Keywords: magnetic photonic crystal, refractive index sensor, sensitivity, high-resolution

Procedia PDF Downloads 591
37316 Geospatial Modeling of Dry Snow Avalanches Distribution Using Geographic Information Systems and Remote Sensing: A Case Study of the Šar Mountains (Balkan Peninsula)

Authors: Uroš Durlević, Ivan Novković, Nina Čegar, Stefanija Stojković

Abstract:

Snow avalanches represent one of the most dangerous natural phenomena in mountain regions worldwide. Material and human casualties caused by snow avalanches can be very significant. In this study, using geographic information systems and remote sensing, the natural conditions of the Šar Mountains were analyzed for geospatial modeling of dry slab avalanches. For this purpose, the Fuzzy Analytic Hierarchy Process (FAHP) multi-criteria analysis method was used, within which fifteen environmental criteria were analyzed and evaluated. Based on the analyses and results, it was determined that a significant area of the Šar Mountains is very highly susceptible to the occurrence of dry slab avalanches. The obtained data can be of significant use to local governments, emergency services, and other institutions that deal with natural disasters at the local level. To the best of our knowledge, this is one of the first studies in the Republic of Serbia that uses the FAHP method for geospatial modeling of dry slab avalanches.
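
The multi-criteria weighting step can be illustrated with a crisp AHP calculation, shown below as a simplified stand-in for the fuzzy AHP used in the study; the three criteria and the pairwise judgements are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (slope, snow depth, land cover).
criteria = ["slope", "snow depth", "land cover"]
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])

# Geometric-mean (crisp AHP) weights; fuzzy AHP would operate on triangular fuzzy numbers instead.
geo_mean = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
weights = geo_mean / geo_mean.sum()

for name, w in zip(criteria, weights):
    print(f"{name:>10}: {w:.3f}")
```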

Keywords: GIS, FAHP, Šar Mountains, snow avalanches, environmental protection

Procedia PDF Downloads 92
37315 Characterization and Geographical Differentiation of Yellow Prickly Pear Produced in Different Mediterranean Countries

Authors: Artemis Louppis, Michalis Constantinou, Ioanna Kosma, Federica Blando, Michael Kontominas, Anastasia Badeka

Abstract:

The aim of the present study was to differentiate yellow prickly pear according to geographical origin based on the combination of mineral content, physicochemical parameters, vitamins and antioxidants. A total of 240 yellow prickly pear samples from Cyprus, Spain, Italy and Greece were analyzed for pH, titratable acidity, electrical conductivity, protein, moisture, ash, fat, antioxidant activity, individual antioxidants, sugars and vitamins by UPLC-MS/MS, as well as minerals by ICP-MS. Statistical treatment of the data included multivariate analysis of variance followed by linear discriminant analysis. Based on the results, a correct cross-validated classification rate of 66.7% was achieved using the mineral content alone, while 86.1% was achieved using the combination of all analytical parameters.
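
A minimal sketch of the chemometric classification step, linear discriminant analysis with cross-validation, is given below; the feature matrix is synthetic and stands in for the measured mineral and physicochemical data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in: 240 samples, 20 analytical parameters, 4 countries of origin.
n_per_country, n_features = 60, 20
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_country, n_features))
               for shift in (0.0, 0.4, 0.8, 1.2)])
y = np.repeat(["Cyprus", "Spain", "Italy", "Greece"], n_per_country)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=10)       # 10-fold cross-validated classification rate
print(f"cross-validated correct classification: {scores.mean():.1%}")
```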

Keywords: geographical differentiation, prickly pear, chemometrics, analytical techniques

Procedia PDF Downloads 143
37314 “It Just Feels Risky”: Intuition vs Evidence in Child Sexual Abuse Cases. Proposing an Empirically Derived Risk and Protection Protocol

Authors: Christian Perrin, Nicholas Blagden, Louise Allen, Sarah Impey

Abstract:

Social workers in the UK, and professionals globally, face a particular challenge when dealing with allegations of child sexual abuse (CSA) in the community. In the absence of a conviction or incontestable evidence, staff can often find themselves unable to take decisive action to remove a child from harm, even though there may be a credible threat to their welfare. Conversely, practitioners may over-estimate risk for fear of being held accountable for harm. This is, in part, due to the absence of a structured, evidence-based risk assessment tool that can predict the likelihood of a person committing child sexual abuse. Such assessments are often conducted by forensic professionals who utilise offence-specific data and personal history information to calculate risk. In situations where only allegations underpin a case, this mode of assessment is not viable. There are further ethical issues surrounding the assessment of risk in this area which require expert consideration and sensitive planning. This paper explores this entangled problem within the wider call to prevent sexual abuse and child sexual abuse in the community. To this end, 32 qualitative interviews were undertaken with social workers dealing with CSA cases. Results were analysed using thematic analysis and operationalised to formulate a risk and protection protocol for use in case management. This paper reports on the early findings associated with the initial indications of protocol reliability. Implications for further research and practice are discussed.

Keywords: sexual offending, child sexual offence, offender rehabilitation, risk assessment, offence prevention

Procedia PDF Downloads 109
37313 Investigation on the Properties of Particulate Reinforced AA2014 Metal Matrix Composite Materials Produced by Vacuum Infiltration Method

Authors: Isil Kerti, Onur Okur, Sibel Daglilar, Recep Calin

Abstract:

Particulate reinforced aluminium matrix composites have gained importance in the automotive, aeronautical and defense industries due to their specific properties, such as low density, high strength and stiffness, good fatigue strength, dimensional stability at high temperature and acceptable tribological properties. In this study, 2014 aluminium alloy was used as the matrix material, and B₄C and SiC were selected as the reinforcement components. The composite materials were produced by the vacuum infiltration method. In the experimental studies, the reinforcement volume fraction was set at a total of 10% B₄C and SiC combined. An aging treatment (T6) was applied to the specimens. The effect of the T6 treatment on hardness was determined using the Brinell hardness test method. The effects of the aging treatment on the microstructure and chemical structure were analysed by XRD, SEM and EDS analyses of the specimens.

Keywords: metal matrix composite, vacuum infiltration method, aluminum metal matrix, mechanical features

Procedia PDF Downloads 315
37312 Discovering Event Outliers for Drug as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (as commercial products) are not available in pharmacies due to shortage. A shortage event disrupts sales and requires a recovery period, which is too long. A critical issue, therefore, is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps to protect the data from being biased downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs over a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the probability of facing a shortage is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though some outlier detection methods exist for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data cleaning method, for outlier adjustment. In the paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers in cancer drug data and has reported positive results. Grubbs' test is applied to detect outliers that exceed the boundaries of a normal distribution. The result is a probability that indicates the core data of actual sales. The outlier test represents the difference between the sample mean and the most extreme observation, considering the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during shortage events and recovery periods. The proposed framework has practical value and could be used to minimize the recovery period required after a shortage event occurs.
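
A minimal sketch of Grubbs' test at a 99% confidence level, as applied in the paper to flag a single extreme sales observation, is given below; the daily-sales series is hypothetical.

```python
import numpy as np
from scipy.stats import t

def grubbs_test(x, alpha=0.01):
    """Two-sided Grubbs' test for a single outlier in an approximately normal sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))
    g = abs(x[idx] - mean) / sd
    t_crit = t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return (g > g_crit), idx, g, g_crit

# Hypothetical daily sales of one drug, with a spike caused by a recovery-period effect.
sales = [12, 14, 13, 11, 15, 12, 13, 41, 14, 12]
is_outlier, idx, g, g_crit = grubbs_test(sales)
print(f"G = {g:.2f}, critical = {g_crit:.2f}, outlier at index {idx}: {is_outlier}")
```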

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 134
37311 Causes, Consequences, and Alternative Strategies of Illegal Migration in Ethiopia: The Case of Tigray Region

Authors: Muuz Abraha Meshesha

Abstract:

Illegal migration, and specifically trafficking in persons, is one of the primary issues of the day, affecting all states of the world, with variation in the extent of the root causes that lead people to migrate irregularly and in the consequences it imposes on humanity. This paper investigates the root causes and consequences of illegal migration in Ethiopia's Tigray Regional State and proposes an alternative intervention strategy. To obtain pertinent and robust findings, the study employed a mixed research approach involving qualitative and quantitative data, together with purposive and snowball sampling techniques. The study revealed that, although poverty is the most commonly perceived push factor for illegal migration, the psycho-social orientation and attitudinal immersion of the local community in illegal migration, in both thinking and action, is the most pressing problem and urgently requires intervention. Trafficking in persons, and illegal migration in general, is becoming the norm of the day in the study area, which shows that illegal migration has, in practice, become an issue beyond the demand for securing a livelihood. Parties engaged in illegal migration, and those acting as accomplices of human traffickers in the study area, were found to be driven by more than the urgency of food security or the need to escape livelihood impoverishment. The study therefore offers a new insight: illegal migration is regarded by local community members as an optional pathway for doing business illegally, while the attitudes of the community and of the officials authorized to regulate it make them part of the channel, or at least tolerant of this grave global danger. The study also found that the effects of illegal migration are manifested more significantly in the long run than in the short term. Therefore, attitude-based interventions and a youth-oriented, enforceable legal, policy and accountability framework are required from international, national and local stakeholders to confront and control illegal migration. In addition, economy-based development interventions are needed that can engage and reorient the youth, the primary victims of trafficking, together with the expansion of large-scale projects that can employ large numbers of young people at a time.

Keywords: human trafficking, illegal migration, migration, Tigray region

Procedia PDF Downloads 65
37310 PET/CT Patient Dosage Assay

Authors: Gulten Yilmaz, A. Beril Tugrul, Mustafa Demir, Dogan Yasar, Bayram Demir, Bulent Buyuk

Abstract:

Positron Emission Tomography (PET) is a radioisotope imaging technique that images the organs and metabolism of the human body. The technique is based on the simultaneous detection of 511 keV annihilation photons, produced when positrons emitted by positron-emitting radioisotopes incorporated into biologically active molecules annihilate with electrons in the body. This study was conducted on ten patients as part of patient-related experimental work. Dose monitoring of the bladder, the organ that receives the highest dose during PET applications, was conducted for 24 hours. Assessment based on measuring urination activity after injection of the patients was also part of the study. The MIRD method was used for the dose calculations based on the results obtained from the experimental studies. The results obtained experimentally and theoretically were compared and assessed.
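
For a single source organ with mono-exponential washout, the MIRD dose calculation reduces to multiplying the cumulated activity by an S-factor; the sketch below illustrates this with hypothetical injected activity, effective half-life and S-value, not the study's measurements.

```python
import numpy as np

# Hypothetical inputs: injected activity (MBq), effective half-life in the bladder (h),
# and S-factor from bladder contents to bladder wall (mGy per MBq*h).
A0 = 370.0
t_eff_h = 1.5
S = 1.6e-2

# Cumulated activity for mono-exponential clearance: A_tilde = A0 * t_eff / ln(2).
A_tilde = A0 * t_eff_h / np.log(2)          # MBq*h
dose_mGy = A_tilde * S                      # MIRD: D = A_tilde * S

print(f"cumulated activity: {A_tilde:.1f} MBq*h, absorbed dose: {dose_mGy:.1f} mGy")
```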

Keywords: PET/CT, TLD, MIRD, dose measurement, patient doses

Procedia PDF Downloads 521
37309 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information. Instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them and finally compressing them using an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and Logistic Regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
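
A minimal sketch of the feature-construction idea, concatenating per-timestamp node embeddings and compressing them before classification, is shown below; random matrices stand in for node2vec outputs and PCA stands in for the auto-encoder-like compression, so this is an illustration of the pipeline rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_nodes, emb_dim, n_timestamps = 500, 32, 5

# Stand-in for node2vec embeddings computed on the network snapshot at each timestamp.
snapshots = [rng.normal(size=(n_nodes, emb_dim)) for _ in range(n_timestamps)]

# Concatenate each node's embeddings over time, then compress to a shorter feature vector.
concatenated = np.hstack(snapshots)                 # shape: (n_nodes, emb_dim * n_timestamps)
compressed = PCA(n_components=16).fit_transform(concatenated)

# Hypothetical churn labels for the following timestamp.
churned = rng.binomial(1, 0.15, size=n_nodes)
X_tr, X_te, y_tr, y_te = train_test_split(compressed, churned, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy on synthetic labels: {clf.score(X_te, y_te):.3f}")
```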

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 314
37308 Conceptual Synthesis as a Platform for Psychotherapy Integration: The Case of Transference and Overgeneralization

Authors: Merav Rabinovich

Abstract:

Background: Psychoanalytic and cognitive therapy approach problems from different points of view. In the recent decade, the integrative movement has been gaining momentum. However, little has been studied regarding the theoretical interrelationship between these therapy approaches. Method: 33 transference case studies published in peer-reviewed academic journals were coded using Luborsky's Core Conflictual Relationship Theme (CCRT) method (components of wish, response from other, real or imagined, and response of self). The CCRT analysis was conducted using the tailor-made method, a valid tool for identifying transference patterns. Rabinovich and Kacen's (2010, 2013) Relationship Between Categories (RBC) method was used to analyze the relationships between these transference patterns and the cognitive and behavioral components appearing in those psychoanalytic case studies. Result: 30 of the 33 cases (90%) connected the transference themes with cognitive overgeneralization. In these cases, overgeneralizations were organized around Luborsky's transference themes of response from other and response of self. Additionally, overgeneralization was found to be an antithesis of the wish component, and the tension between them was found to be linked with powerful behavioral and emotional reactions. Conclusion: The findings indicate that thinking distortions of overgeneralization (cognitive therapy) are actual expressions of transference patterns. These findings point to a theoretical junction, a platform for clinical integration. Awareness of this junction can help therapists promote good psychotherapy outcomes by relying on the accumulated wisdom of the different therapies.

Keywords: transference, overgeneralization, theoretical integration, case-study metasynthesis, CCRT method, RBC method

Procedia PDF Downloads 142
37307 Analysis of the Behavior of the Structure Under Internal Anfo Explosion

Authors: Seung-Min Ko, Seung-Jai Choi, Gun Jung, Jang-Ho Jay Kim

Abstract:

Although extensive explosion-related research has been performed in the past several decades, almost no research has focused on internal blasts. However, internal blast research is needed to understand the behavior of a containment structure or building under internal blast loading, as in the case of the Chornobyl and Fukushima nuclear accidents. Therefore, an internal blast study concentrating on RC and PSC structures was performed. Test data obtained from reinforced concrete (RC) and prestressed concrete (PSC) tubular structures subjected to internal explosions of ammonium nitrate/fuel oil (ANFO) charges are used to assess their deformation resistance and ultimate failure load based on the change in structural stiffness under various charge weights. For the internal blast charge weights, ANFO charges of 15.88, 20.41, 22.68 and 24.95 kg were selected for the RC tubular structures, and 22.68, 24.95, 27.22, 29.48 and 31.75 kg for the PSC tubular structures; the charges were detonated at the center of the cross section at mid-span, with a standoff distance of 1,000 mm to the inner wall surface. The test data were then used to predict the internal charge weight required to fail a real-scale reinforced concrete containment vessel (RCCV) and a prestressed concrete containment vessel (PCCV). Analytical results based on the experimental data were derived using simple model assumptions, and another approach, using the relationship between stiffness, deformation and explosive weight, was used to formulate a general method for analyzing internally blasted tubular structures. A model of an internal explosion in a steel tube was used as an example for validation. The proposed method can be used generically, with factors chosen according to the material characteristics of the target structures. The results of the study are discussed in detail in the paper.

Keywords: internal blast, reinforced concrete, RCCV, PCCV, stiffness, blast safety

Procedia PDF Downloads 79
37306 A New Approach for Assertions Processing during Assertion-Based Software Testing

Authors: Ali M. Alakeel

Abstract:

Assertion-based software testing has been shown to be a promising tool for generating test cases that reveal program faults. Because the number of assertions may be very large for industry-size programs, one of the main concerns about the applicability of assertion-based testing is the amount of search time required to explore a large number of assertions. This paper presents a new approach for assertion exploration during the process of assertion-based software testing. Our initial experiments with the proposed approach show that the performance of assertion-based testing may be improved, therefore making this approach more efficient when applied to programs with a large number of assertions.
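
A minimal sketch of the underlying idea, searching for inputs that violate a program assertion, is given below; the program, its assertion and the random search are hypothetical illustrations rather than the paper's exploration algorithm.

```python
import random

def scaled_difference(a, b):
    """Toy program under test: the assert encodes an (incorrect) postcondition."""
    diff = a - b
    result = diff * diff - 2 * diff        # negative whenever 0 < diff < 2, so the assert can fail
    assert result >= 0, "postcondition violated"
    return result

def search_for_violation(trials=10_000):
    """Random exploration of the input space, stopping at the first assertion violation."""
    for i in range(trials):
        a, b = random.randint(-100, 100), random.randint(-100, 100)
        try:
            scaled_difference(a, b)
        except AssertionError:
            return (a, b), i + 1
    return None, trials

inputs, attempts = search_for_violation()
print(f"assertion-violating input {inputs} found after {attempts} trials")
```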

Keywords: software testing, assertion-based testing, program assertions, generating test

Procedia PDF Downloads 460
37305 The Importance of Developing Pedagogical Agency Capacities in Initial Teacher Formation: A Critical Approach to Advance in Social Justice

Authors: Priscilla Echeverria

Abstract:

This paper addresses initial teacher formation as a formative space in which pedagogy students develop the capacity for pedagogical agency to contribute to social justice, considering ethical, political, and epistemic dimensions. The paper first discusses the concepts of agency, pedagogical interaction, and social justice from a critical perspective, and then offers preliminary results on the capacity for pedagogical agency in novice teachers after the analysis of critical incidents as a research methodology. This study is motivated by the concern that, in response to the current neoliberal scenario, many initial teacher formation (ITF) programs have reduced the meaning of education to instruction, and pedagogy to methodology, favouring the formation of a technical professional over a reflective or critical one. From this concern, this study proposes that the restitution of the subject is an urgent task in teacher formation, so it is essential to enable teachers in their capacity for action and to advance in eliminating institutionalized oppression insofar as it affects that capacity. Given that oppression takes place in human interaction, through this work I propose that initial teacher formation should develop sensitivity and educate the gaze to identify oppression and take action against it, both in pedagogical interactions, which configure political, ethical, and epistemic subjectivities, and in the hidden and official curriculum. All this rests on the premise that modelling democratic and dialogical interactions is basic for any program that seeks to contribute to a more just and empowered society. The contribution of this study lies in the fact that it opens a discussion in an area about which we know little: the impact of the type of interactions offered by university teaching in ITF on the capacity of future teachers to be pedagogical agents. For this reason, this study seeks to gather evidence of the result of this formation by analysing the capacity for pedagogical agency of novice teachers, or, in other words, how capable graduates of secondary-education pedagogy programs are, in their first pedagogical experiences, of acting and making decisions that put the formative purposes they are able to define autonomously before the technical or bureaucratic issues imposed by the curriculum or the official culture. This discussion is part of my doctoral research, "The importance of developing the capacity for ethical-political-epistemic agency in novice teachers during initial teacher formation to contribute to social justice", which I am currently developing in the Educational Research program of the University of Lancaster, United Kingdom, as a Conicyt fellow of the 2019 cohort.

Keywords: initial teacher formation, pedagogical agency, pedagogical interaction, social justice, hidden curriculum

Procedia PDF Downloads 97