Search results for: dimensional affect prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7731

5601 COSMO-RS Prediction for Choline Chloride/Urea Based Deep Eutectic Solvent: Chemical Structure and Application as Agent for Natural Gas Dehydration

Authors: Tayeb Aissaoui, Inas M. AlNashef

Abstract:

In recent years, green solvents named deep eutectic solvents (DESs) have been found to possess significant properties and to be applicable in several technologies. Choline chloride (ChCl) mixed with urea at a 1:2 ratio at 80 °C was the first DES to be discovered. In this article, the chemical structure and combination mechanism of the ChCl:urea DES were investigated. Moreover, the use of this DES to remove water from natural gas was reported. Dehydration of natural gas by ChCl:urea shows significant absorption efficiency compared to triethylene glycol. All of the above computations were carried out with the COSMOthermX software. This article confirms the potential application of DESs in the gas industry.

Keywords: COSMO-RS, deep eutectic solvents, dehydration, natural gas, structure, organic salt

Procedia PDF Downloads 282
5600 Drying Modeling of Banana Using Cellular Automata

Authors: M. Fathi, Z. Farhaninejad, M. Shahedi, M. Sadeghi

Abstract:

Drying is one of the oldest preservation methods for food and agricultural products. Appropriate control of the operation can be achieved through modeling. The limitations of continuous models for complex boundary conditions and irregular geometries have led to the appearance of novel discrete methods such as cellular automata (CA), which provide a platform for fast predictions based on rule-based mathematics. In this research, a one-dimensional CA was used to simulate the thin-layer drying of banana. Banana slices were dried in a convective air dryer, and the experimental data were recorded for validating the final model. The model was programmed in MATLAB and run for 70,000 iterations using a von Neumann neighborhood. The validation results showed good agreement between experimental and predicted data (R=0.99). Cellular automata are capable of reproducing the expected drying pattern and have powerful potential for solving physical problems with reasonable accuracy and low computational cost.
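As a rough illustration of the rule-based approach the abstract describes, the sketch below implements a one-dimensional cellular automaton for thin-layer drying in Python (the paper's model was written in MATLAB). The update rule, cell count, and rate constants are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def ca_drying(n_cells=50, iterations=70_000, d=0.2, k_evap=1e-4):
    """1D cellular automaton for thin-layer drying (illustrative rule,
    not the authors' exact MATLAB model). Each cell holds a moisture
    fraction; interior cells exchange moisture with their von Neumann
    neighbours (left/right in 1D) and the exposed boundary cells lose
    moisture to the drying air."""
    m = np.ones(n_cells)               # initial moisture content (normalised)
    history = []
    for step in range(iterations):
        # diffusion-like exchange with the two 1D von Neumann neighbours
        flux = d * (np.roll(m, 1) + np.roll(m, -1) - 2.0 * m)
        flux[0] = d * (m[1] - m[0])    # no wrap-around at the slice faces
        flux[-1] = d * (m[-2] - m[-1])
        m += flux
        m[0] -= k_evap * m[0]          # evaporation at the exposed faces
        m[-1] -= k_evap * m[-1]
        if step % 1000 == 0:
            history.append(m.mean())   # average moisture for the drying curve
    return np.array(history)

curve = ca_drying()
print(curve[:5])  # early part of the predicted drying curve
```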

Keywords: banana, cellular automata, drying, modeling

Procedia PDF Downloads 432
5599 Open-Ended Multi-Modal Relational Reasoning for Video Question Answering

Authors: Haozheng Luo, Ruiyang Qin

Abstract:

People with visual impairments urgently need assistance, not only with fundamental tasks such as guiding and retrieving objects but also with advanced ones such as picturing new environments. Beyond what a guide dog offers, they may want devices that provide linguistic interaction. Building on this idea, we aim to study the interaction between a robot agent and visually impaired people. In our research, we will develop a robot agent able to analyze the test environment and answer the participants' questions. We will also study the relevant issues in the interaction between human beings and robot agents, to determine which factors affect the interaction and how.

Keywords: HRI, video question answering, visual question answering, natural language processing

Procedia PDF Downloads 211
5598 Evaluating Service Trustworthiness for Service Selection in Cloud Environment

Authors: Maryam Amiri, Leyli Mohammad-Khanli

Abstract:

Cloud computing is becoming increasingly popular, and more business applications are moving to the cloud. As a result, the number of services providing similar functional properties is growing, and users need the ability to select the service whose non-functional properties best match their preferences. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without requiring additional invocations of them. EFST extracts user preferences automatically. It then assesses the trustworthiness of services along two dimensions, qualitative and quantitative metrics, based on past usage experience. Finally, EFST determines the overall trustworthiness of each service using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST predicts missing Quality of Service (QoS) values better than competing approaches and guides users toward selecting the most appropriate services.
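For readers unfamiliar with the final aggregation step, the following is a minimal sketch of how a Mamdani-style fuzzy inference system can fuse a qualitative and a quantitative trust score into one trustworthiness value. The membership functions and rule base here are assumptions made for illustration; EFST's actual FIS is defined in the paper.

```python
import numpy as np

def ramp_down(x, a, b):   # membership 1 below a, falling linearly to 0 at b
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def ramp_up(x, a, b):     # membership 0 below a, rising linearly to 1 at b
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def trustworthiness(qual, quant):
    """Fuse a qualitative and a quantitative trust score (both in [0, 1])
    with a tiny Mamdani-style rule base and centroid defuzzification.
    Rule base and membership shapes are illustrative assumptions, not
    the actual FIS used by EFST."""
    low_q, high_q = ramp_down(qual, 0.2, 0.6), ramp_up(qual, 0.4, 0.8)
    low_n, high_n = ramp_down(quant, 0.2, 0.6), ramp_up(quant, 0.4, 0.8)
    w_high = min(high_q, high_n)                         # both high -> high trust
    w_med = max(min(high_q, low_n), min(low_q, high_n))  # mixed -> medium trust
    w_low = min(low_q, low_n)                            # both low -> low trust
    out = np.linspace(0.0, 1.0, 101)
    agg = np.maximum.reduce([
        w_low * ramp_down(out, 0.0, 0.5),
        w_med * np.clip(1.0 - np.abs(out - 0.5) / 0.25, 0.0, 1.0),
        w_high * ramp_up(out, 0.5, 1.0),
    ])
    return float((agg * out).sum() / (agg.sum() + 1e-12))  # centroid

print(trustworthiness(0.8, 0.7))   # e.g. a service with good past QoS
```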

Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction

Procedia PDF Downloads 283
5597 Mathematical Modeling and Optimization of Burnishing Parameters for 15NiCr6 Steel

Authors: Tarek Litim, Ouahiba Taamallah

Abstract:

The present paper investigates the effect of burnishing on the surface integrity of a component made of 15NiCr6 steel. Based on a statistical regression study and Taguchi's design of experiments, mathematical models were developed to predict the output responses as functions of the studied technological parameters. The response surface methodology (RSM) captured the simultaneous influence of the burnishing parameters and was used to locate the optimal processing parameters. ANOVA validated the prediction models with coefficients of determination of 90.60% and 92.41% for roughness and hardness, respectively. Furthermore, a multi-objective optimization identified a regime characterized by P=10 kgf, i=3 passes, and f=0.074 mm/rev, which favours minimum roughness and maximum hardness. This result was validated by desirability values of D=0.99 and D=0.95 for roughness and hardness, respectively.
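The multi-objective step can be illustrated with a composite desirability search over fitted second-order response surfaces. In the sketch below, the two quadratic models and all coefficients are invented stand-ins for the paper's fitted RSM models; only the desirability mechanics are the point.

```python
import numpy as np
from itertools import product

# Illustrative second-order (RSM-style) response models; the coefficients
# below are made up for demonstration. The paper fits its own models to
# Taguchi-designed burnishing experiments on 15NiCr6.
def roughness(P, i, f):   # lower is better (micrometres)
    return 1.2 - 0.05*P - 0.15*i + 8.0*f + 0.002*P**2 + 0.03*i**2

def hardness(P, i, f):    # higher is better (HV)
    return 250 + 6.0*P + 10.0*i - 120.0*f - 0.25*P**2 - 1.5*i**2

def desirability(y, lo, hi, maximize):
    d = (y - lo) / (hi - lo) if maximize else (hi - y) / (hi - lo)
    return float(np.clip(d, 0.0, 1.0))

best = max(
    product(np.linspace(4, 16, 13),        # force P (kgf)
            [1, 2, 3, 4, 5],               # number of passes i
            np.linspace(0.05, 0.20, 16)),  # feed f (mm/rev)
    # composite desirability: geometric mean of the two individual scores
    key=lambda x: (desirability(roughness(*x), 0.2, 1.5, maximize=False)
                   * desirability(hardness(*x), 250, 320, maximize=True)) ** 0.5,
)
print("optimal regime (P, i, f):", best)
```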

Keywords: 15NiCr6 steel, burnishing, surface integrity, Taguchi, RSM, ANOVA

Procedia PDF Downloads 187
5596 An Approach for Thermal Resistance Prediction of Plain Socks in Wet State

Authors: Tariq Mansoor, Lubos Hes, Vladimir Bajzik

Abstract:

Sock comfort has great significance in our daily life, and even more so during low- or high-intensity activity, which causes the body to sweat at different rates. In this study, plain socks with different fibre compositions were wetted to saturation. After successive conditioning intervals, the socks were characterized by their thermal resistance in the dry and wet states. Theoretical thermal resistance was predicted using combined filling coefficients and the thermal conductivity of the wet polymer instead of the dry polymer (fibre) in different models. With this modification, different mathematical models could predict thermal resistance at different moisture levels. The thermal resistance predicted by the different models shows reasonable correlation (0.84-0.98) with experimental results in both the dry (laboratory-moisture) and wet states. This work is supported by the Technical University of Liberec under SGC-2019, project number 21314.
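A minimal sketch of the described modification, under the assumption of a simple parallel (volume-weighted) mixing model: the thermal resistance R = h / k_eff is computed with water treated as an additional conducting phase alongside fibre and air. The conductivities and volume fractions below are illustrative, not the paper's measured values.

```python
# Predict sock thermal resistance R = h / k_eff from filling coefficients,
# replacing part of the air volume with water in the wet state.
# All numbers are illustrative assumptions, not the paper's data.
def effective_conductivity(f_fibre, f_water, k_fibre=0.20,
                           k_water=0.60, k_air=0.026):
    """f_fibre: fibre volume fraction (filling coefficient);
    f_water: water volume fraction; the remaining volume is air.
    Conductivities in W/(m*K); parallel mixing rule."""
    f_air = 1.0 - f_fibre - f_water
    return f_fibre * k_fibre + f_water * k_water + f_air * k_air

def thermal_resistance(thickness_m, f_fibre, f_water):
    return thickness_m / effective_conductivity(f_fibre, f_water)

# a 1.5 mm thick sock, dry vs. partially wet: resistance drops when wet
print(thermal_resistance(0.0015, 0.35, 0.00))
print(thermal_resistance(0.0015, 0.35, 0.20))
```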

Keywords: thermal resistance, mathematical model, plain socks, moisture loss rate

Procedia PDF Downloads 188
5595 Air Flows along Perforated Metal Plates with the Heat Transfer

Authors: Karel Frana, Sylvio Simon

Abstract:

The objective of the paper is a numerical study of the heat transfer between perforated metal plates and the surrounding air flow. Different perforation structures can nowadays be found in various industrial products. Besides improving mechanical properties, perforations can intensify heat transfer as well. The heat transfer coefficient depends on a wide range of parameters, such as the perforation type, size, and shape and the flow properties of the surrounding air. The paper focuses on three perforation structures that had been investigated from the production point of view in previous studies. To determine the heat transfer coefficients and the Nusselt numbers, a numerical simulation approach was adopted. The calculations were performed using the OpenFOAM software, considering three-dimensional, unsteady, turbulent, and incompressible air flow around the perforated metal plate.

Keywords: perforations, convective heat transfers, turbulent flows, numerical simulations

Procedia PDF Downloads 573
5594 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
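A minimal sketch of the ensemble idea using scikit-learn: soft voting averages the predicted probabilities of the three model families named in the abstract. The synthetic data and hyper-parameters below are placeholders, not the study's Los Angeles datasets or tuned settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in features: timing variables, weather forecasts, past pollutant levels.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft-voting ensemble over the three model families named in the abstract;
# hyper-parameters here are defaults/guesses, not the study's tuned values.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
    ],
    voting="soft",   # average predicted probabilities across models
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```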

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 115
5593 Narrative Research in Secondary Teacher Education: Examining the Self-Efficacy of Content Area Teacher Candidates

Authors: Tiffany Karalis Noel

Abstract:

The purpose of this study was to examine the factors attributed to the self-efficacy of beginning secondary content area teachers as they moved through their student teaching experiences. This study used a narrative inquiry methodology to understand the variables attributed to teacher self-efficacy among a group of secondary content area teacher candidates. The primary purpose of using a narrative inquiry methodology was to share the stories of content area teacher candidates’ student teaching experiences. Focused research questions included: (1) To what extent does teacher education preparation affect the self-efficacy of beginning content area teachers? (2) Which recurrent elements of teacher education affect the self-efficacy of beginning teachers, regardless of content area? (3) How do the findings from research questions 1 and 2 inform teacher educators? The findings of this study suggest that teacher education preparation affects the self-efficacy of beginning secondary teacher candidates across the content areas; accordingly, the findings of this study provide insight for teacher educators to consider the areas where teacher education programs are failing to provide adequate preparation. These teacher candidates emphasized the value of adequate preparation throughout their teacher education programs to help inform their student teaching experiences. In order to feel effective and successful as beginning teachers, these teacher candidates required additional opportunities for practical application of their teaching skills prior to the student teaching experience, the incorporation of classroom management strategy coursework into their curriculum, and opportunities to explore the extensive demands of the teaching profession, ranging from time management to dealing with difficult parents, to name a few examples. The teacher candidates experienced feelings of self-doubt related to their effectiveness as teachers when they were unable to employ successful classroom management strategies or pedagogical techniques, or to feel confident navigating challenging conversations with students, parents, and/or administrators. To help future teacher candidates and beginning teachers in general overcome these barriers, additional coursework, fieldwork, and practical application experiences should be provided in teacher education programs to boost the self-efficacy of student teachers.

Keywords: self-efficacy, teacher efficacy, secondary preservice teacher education, teacher candidacy, student teaching

Procedia PDF Downloads 147
5592 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g. 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the important influence of local geology on ground response. Observations from damaging earthquakes (e.g. Niigata and Alaska, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification in the soft soils of Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, the results clearly show that the non-linear site response model performs better than the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response and the lack of accurate values for the shear wave velocity and non-linear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times larger in the non-linear analysis than in the equivalent linear analysis. Hence, the non-linear behavior of soil highlights the urgent need to study the dynamic characteristics of the soft soil deposit, so that site-specific design spectra can be developed for the Kathmandu valley for building structures resilient to future damaging earthquakes.
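The equivalent linear procedure mentioned above can be illustrated for a single layer: assume a shear modulus, compute the strain it implies, update the modulus from a degradation curve, and iterate to a strain-compatible value. The hyperbolic G/Gmax curve and soil numbers below are round-number assumptions, not Kathmandu valley data.

```python
import numpy as np

def equivalent_linear(gamma_ref=1e-3, g_max=60e6, tau_applied=30e3,
                      ratio=0.65, tol=1e-6):
    """One-layer illustration of the equivalent-linear iteration used by
    programs such as DEEPSOIL: assume a modulus, compute the strain, read
    the strain-compatible modulus from a hyperbolic G/Gmax curve, repeat.
    Soil parameters here are assumed round numbers, not site data."""
    g = g_max
    for _ in range(100):
        gamma = tau_applied / g                        # strain for current G
        gamma_eff = ratio * gamma                      # effective strain (~0.65 x peak)
        g_new = g_max / (1.0 + gamma_eff / gamma_ref)  # hyperbolic degradation
        if abs(g_new - g) / g < tol:
            break
        g = g_new
    return g, gamma

g_compat, strain = equivalent_linear()
print(f"strain-compatible G = {g_compat/1e6:.1f} MPa, strain = {strain:.2%}")
```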

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 287
5591 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems

Authors: Bruno Trstenjak, Dzenana Donko

Abstract:

Data mining and classification of objects is a process of data analysis, using various machine learning techniques, that is applied today in many fields of research. This paper presents a concept for a hybrid classification model improved with expert knowledge. The hybrid model integrates several machine learning techniques (Information Gain, K-means, and Case-Based Reasoning) with the expert's knowledge in a single algorithm. The knowledge of experts is used to determine the importance of features. The paper presents the model algorithm and the results of a case study in which the emphasis was on achieving maximum classification accuracy without reducing the number of features.
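A compact sketch of how such a pipeline could be wired together: information-gain scores (approximated here by mutual information) and an assumed expert weight vector scale the features, k-means partitions the case base, and a new case is classified by nearest-neighbour retrieval (the CBR step) inside its cluster. The details differ from the paper's actual algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Expert knowledge enters as extra per-feature importance multipliers:
# here an assumed vector, since the paper's expert weights are not given.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
ig = mutual_info_classif(X, y, random_state=1)     # information-gain proxy
expert = np.ones(10)
expert[:3] = 2.0                                   # expert boosts features 0-2
w = ig * expert
Xw = X * w                                         # weighted feature space

km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(Xw)

def classify(case):
    cw = case * w
    # restrict retrieval to the closest cluster, then reuse the label of
    # the nearest stored case (case-based reasoning step)
    members = np.where(km.labels_ == km.predict(cw[None, :])[0])[0]
    nearest = members[np.argmin(np.linalg.norm(Xw[members] - cw, axis=1))]
    return y[nearest]

print(classify(X[0]), "vs. true label", y[0])
```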

Keywords: case based reasoning, classification, expert's knowledge, hybrid model

Procedia PDF Downloads 364
5590 Establishing Multi-Leveled Computability as a Living-System Evolutionary Context

Authors: Ron Cottam, Nils Langloh, Willy Ranson, Roger Vounckx

Abstract:

We start by formally describing the requirements for environmental-reaction survival computation in a natural temporally-demanding medium, and develop this into a more general model of the evolutionary context as a computational machine. The effect of this development is to replace deterministic logic by a modified form which exhibits a continuous range of dimensional fractal diffuseness between the isolation of perfectly ordered localization and the extended communication associated with nonlocality as represented by pure causal chaos. We investigate the appearance of life and consciousness in the derived general model, and propose a representation of Nature within which all localizations have the character of quasi-quantal entities. We compare our conclusions with Heisenberg’s uncertainty principle and nonlocal teleportation, and maintain that computability is the principal influence on evolution in the model we propose.

Keywords: computability, evolution, life, localization, modeling, nonlocality

Procedia PDF Downloads 395
5589 Youth and Radicalization: Main Causes That Lead Young People to Radicalize in a Context with a Background of Radicalization

Authors: Zineb Emrane

Abstract:

This abstract addresses the issue of radicalization of young people in a context with a background of radicalization: northern Morocco, from which five of the terrorists behind the Madrid attacks of 11 March 2004 came. A pilot study was developed describing young people's perceptions of the main causes that lead to and motivate radicalization. Whenever this topic is discussed, information comes from studies and investigations by specialists in the field, but voice is rarely given to the protagonists, who in many cases are victims: young people at social risk because of social factors. Extremist radicalization is an expanding phenomenon that affects young people in northern Morocco. They live in a context with a radical background and at risk of social exclusion; their social, economic, and family needs make them vulnerable, and extremist groups take advantage of this vulnerability to involve them in a process of radicalization, offering an alternative environment where they can find everything they are looking for. This pilot study approaches the main causes that lead and motivate young people to become radicals, analyzing their context with emphasis on influencing factors and taking into account young people's own analysis of how the radical background affects them and their opinion of this phenomenon. The pilot study was carried out through the following actions: group dynamics with young people to analyze the process of violent radicalization; a participatory workshop with members of organizations that work directly with young people at risk of radicalization; interviews with institutional managers; and participant observation. The implementation of these actions led to the conclusion that young people define violent radicalization as a sequential process that, depending on the stage, can be deconstructed. Young people recognize that they stop feeling that they belong to their family, school, and neighborhood when they see behavior contrary to their notions of good and evil. The emotional rupture and the search for reference points outside their circle push them to sympathize with groups that have an extremist ideology and that offer them what they need. Radicalization is a process with different stages; the main causes and factors that lead young people to use extremist violence are related to their weak sense of belonging to their context and a lack of critical thinking about important issues. Young people are at a vulnerable stage, searching for their identity and for a space in which they can be accepted, and when they do not find it, they are easily manipulated and susceptible to being attracted by extremist groups.

Keywords: exclusion, radicalization, vulnerability, youth

Procedia PDF Downloads 157
5588 Productivity and Household Welfare Impact of Technology Adoption: A Microeconometric Analysis

Authors: Tigist Mekonnen Melesse

Abstract:

Since rural households are basically entitled to food through their own production, improving productivity may enhance the welfare of the rural population through higher food availability at the household level and lower prices for agricultural products. Increasing agricultural productivity through the use of improved technology is one of the desired outcomes of sensible food security and agricultural policy. The objective of this study was to evaluate the potential impact of improved agricultural technology adoption on smallholders' crop productivity and welfare. The study covers 1,500 rural households in Ethiopia, drawn from four regions and 15 rural villages, based on data collected by the Ethiopian Rural Household Survey. An endogenous treatment-effects model is employed to account for the selection bias in the adoption decision that is expected from households' self-selection into technology adoption. The treatment indicator, technology adoption, is a binary variable indicating whether the household used improved seeds and chemical fertilizer. The outcome variables were cereal crop productivity, measured as the real value of production, and household welfare, measured as real per capita consumption expenditure. The results indicate a positive and significant effect of improved technology use on rural households' crop productivity and welfare in Ethiopia. Adoption of improved seeds or chemical fertilizer alone increases crop productivity by 7.38 and 6.32 percent per year, respectively, and improves household welfare by 1.17 and 0.25 percent per month, respectively. When both technologies are adopted jointly, crop productivity increases by 5.82 percent and welfare improves by 0.42 percent. In addition, the household head's educational level, farm size, labor use, participation in extension programs, input expenditure, and number of oxen positively affect crop productivity and household welfare, while a large household size negatively affects welfare. In our estimation, the average treatment effect on the treated (ATET) equals the average treatment effect (ATE), implying that the average predicted outcome for the treatment group is similar to the average predicted outcome for the whole population.
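One standard way to estimate such a model is the two-step control-function approach sketched below: a probit models adoption, and its generalized residual (built from the inverse Mills ratio) is added to the outcome regression to absorb selection on unobservables. The simulated variables, including the instrument, are placeholders, not the Ethiopian survey data, and the paper's exact estimator may differ.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1500
educ = rng.normal(size=n)
farm = rng.normal(size=n)
z = rng.normal(size=n)                       # instrument (e.g. extension access)
u = rng.normal(size=n)                       # unobservable driving selection
adopt = (0.5*educ + 0.8*z + u > 0).astype(float)
log_output = 1.0 + 0.7*adopt + 0.3*educ + 0.2*farm + 0.5*u + rng.normal(size=n)

# Step 1: probit for adoption, then the generalized (probit) residual
X1 = sm.add_constant(np.column_stack([educ, z]))
probit = sm.Probit(adopt, X1).fit(disp=0)
xb = probit.fittedvalues                     # linear predictor X*beta
gres = (adopt * norm.pdf(xb) / norm.cdf(xb)
        - (1 - adopt) * norm.pdf(xb) / (1 - norm.cdf(xb)))

# Step 2: outcome regression including the control function
X2 = sm.add_constant(np.column_stack([adopt, educ, farm, gres]))
ols = sm.OLS(log_output, X2).fit()
print("estimated adoption effect:", round(ols.params[1], 3))
```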

Keywords: endogenous treatment effect, technologies, productivity, welfare, Ethiopia

Procedia PDF Downloads 644
5587 The Implementation of a Numerical Technique to Thermal Design of Fluidized Bed Cooler

Authors: Damiaa Saad Khudor

Abstract:

The paper describes an investigation of the thermal design of a fluidized bed cooler and the prediction of the heat transfer rate among the media involved. It is devoted to the thermal design of such equipment and its application in industry, and it outlines the strategy for the fluidization heat transfer mode and its industrial implementation. The thermal design procedure is used to furnish a complete design for a fluidized bed cooler for sodium bicarbonate. The total thermal load distribution between air-solid and water-solid along the cooler is calculated according to thermal equilibrium. A step-by-step technique was used to accomplish the thermal design, predicting the load and the air, solid, and water temperatures along the trough. The resulting design calls for a heat exchanger consisting of 65 horizontal tubes, 33.4 mm in diameter and 4 m in length, inside the bed trough.

Keywords: fluidization, powder technology, thermal design, heat exchangers

Procedia PDF Downloads 506
5586 Biological Hotspots in the Galápagos Islands: Exploring Seasonal Trends of Ocean Climate Drivers to Monitor Algal Blooms

Authors: Emily Kislik, Gabriel Mantilla Saltos, Gladys Torres, Mercy Borbor-Córdova

Abstract:

The Galápagos Marine Reserve (GMR) is an internationally-recognized region of consistent upwelling events, high productivity, and rich biodiversity. Despite its high-nutrient, low-chlorophyll condition, the archipelago has experienced phytoplankton blooms, especially in the western section between Isabela and Fernandina Islands. However, little is known about how climate variability will affect future phytoplankton standing stock in the Galápagos, and no consistent protocols currently exist to quantify phytoplankton biomass, identify species, or monitor for potential harmful algal blooms (HABs) within the archipelago. This analysis investigates physical, chemical, and biological oceanic variables that contribute to algal blooms within the GMR, using 4 km Aqua MODIS satellite imagery and 0.125-degree wind stress data from January 2003 to December 2016. Furthermore, this study analyzes chlorophyll-a concentrations at varying spatial scales: within the greater archipelago, as well as within five smaller bioregions based on species biodiversity in the GMR. Seasonal and interannual trend analyses, correlations, and hotspot identification were performed. Results demonstrate that chlorophyll-a is expressed in two seasons throughout the year in the GMR, most frequently in September and March, with a notable hotspot in the Elizabeth Bay bioregion. Interannual chlorophyll-a trend analyses revealed highest peaks in 2003, 2007, 2013, and 2016, and variables that correlate highly with chlorophyll-a include surface temperature and particulate organic carbon. This study recommends future in situ sampling locations for phytoplankton monitoring, including the Elizabeth Bay bioregion. Conclusions from this study contribute to the knowledge of oceanic drivers that catalyze primary productivity and consequently affect species biodiversity within the GMR. Additionally, this research can inform policy and decision-making strategies for species conservation and management within bioregions of the Galápagos.

Keywords: bioregions, ecological monitoring, phytoplankton, remote sensing

Procedia PDF Downloads 259
5585 Comparison of ANN and Finite Element Model for the Prediction of Ultimate Load of Thin-Walled Steel Perforated Sections in Compression

Authors: Zhi-Jun Lu, Qi Lu, Meng Wu, Qian Xiang, Jun Gu

Abstract:

The analysis of perforated steel members is inherently a 3D problem, so traditional analytical expressions for the ultimate load of thin-walled steel sections cannot be used for perforated steel member design. In this study, the finite element method (FEM) and an artificial neural network (ANN) were used to simulate stub column tests based on specific codes. The results show that, compared with the FEM model, the ultimate load predictions obtained from the ANN technique were much closer to those obtained from the physical experiments. The ANN model is therefore very promising for the hard problem of complex perforated steel sections.
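A minimal sketch of the ANN surrogate idea with scikit-learn: a small multilayer perceptron maps section parameters to ultimate load. Synthetic regression data stands in for the paper's stub-column test/FEM database, and the network architecture is an assumption.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs would be section geometry and perforation parameters (thickness,
# width, hole size and position, ...); the target is the ultimate load.
# Synthetic data stands in for the paper's experimental/FEM database.
X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out sections:", round(ann.score(X_te, y_te), 3))
```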

Keywords: artificial neural network (ANN), finite element method (FEM), perforated sections, thin-walled steel, ultimate load

Procedia PDF Downloads 344
5584 Simulation of Die Casting Process in an Industrial Helical Gearbox Flange Die

Authors: Mehdi Modabberifar, Behrouz Raad, Bahman Mirzakhani

Abstract:

Flanges are widely used for connecting valves, pipes, and other industrial devices such as gearboxes. The method used to produce a flange has a considerable impact on how it performs in industrial engines and gearboxes. Using die casting instead of sand casting and machining to manufacture flanges increases production speed and the dimensional accuracy of the parts. In die casting, the obtained dimensions are also close to the final dimensions, so the need for machining flanges after die casting decreases, which yields significant savings in raw materials and improves the mechanical properties of the flanges. In this paper, a typical die for an industrial helical gearbox flange (size ISO 50) was designed, and the die casting process for producing this type of flange was simulated using the ProCAST software. The simulation results were used to optimize the die design, and the optimized die was then built.

Keywords: die casting, finite element, flange, helical gearbox

Procedia PDF Downloads 360
5583 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates a reduction in the time needed to establish test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
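A toy sketch of the core mapping: test-step specifications (text) to sets of automation components, evaluated with subset accuracy. The steps, component names, and the TF-IDF/one-vs-rest learner below are invented stand-ins for the paper's actual features and model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Historical data: each test-step specification maps to the automation
# components that implemented it (names invented for illustration).
steps = ["switch ignition on", "set vehicle speed to 50 kph",
         "check warning lamp is off", "switch ignition on and check lamp"]
labels = [{"IgnitionCtrl"}, {"SpeedCtrl"}, {"LampCheck"},
          {"IgnitionCtrl", "LampCheck"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                  # multi-label indicator matrix
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(steps, Y)

pred = clf.predict(["switch ignition on"])
print(mlb.inverse_transform(pred))             # predicted component set
# subset accuracy: a prediction counts only if the full label set matches
print("subset accuracy:", accuracy_score(Y, clf.predict(steps)))
```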

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 124
5582 A Source Point Distribution Scheme for Wave-Body Interaction Problem

Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing

Abstract:

A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free surface Rankine sources over calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of free surface and body surface, is developed to reduce numerical computation cost. Associated with this, a new treatment is given to the intersection point. The present scheme results are in good agreement with traditional numerical results and measurements.
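The essence of a desingularized Rankine-source method can be sketched in a few lines: raised sources, an influence matrix of 2D source potentials ln(r)/2π, and a linear solve for the source strengths. The boundary data below is an arbitrary demonstration function, not the paper's linearised free-surface condition.

```python
import numpy as np

# Minimal desingularized 2D Rankine-source sketch: sources of strength
# sigma_j sit a small distance above the calm water surface, and their
# strengths are solved so that a prescribed potential is matched at
# collocation points on the surface.
n = 40
xc = np.linspace(-1.0, 1.0, n)              # collocation points on y = 0
xs, ys = xc.copy(), 0.05 * np.ones(n)       # raised (desingularized) sources

# influence matrix: potential of a unit 2D Rankine source, ln(r) / (2*pi);
# the offset ys guarantees r > 0, so no singular self-influence terms arise
r = np.hypot(xc[:, None] - xs[None, :], 0.0 - ys[None, :])
A = np.log(r) / (2.0 * np.pi)

phi_target = np.cos(np.pi * xc)             # prescribed boundary values (demo)
sigma = np.linalg.solve(A, phi_target)      # source strengths
print("max boundary-condition residual:", np.abs(A @ sigma - phi_target).max())
```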

Keywords: source point distribution, panel method, Rankine source, desingularized algorithm

Procedia PDF Downloads 362
5581 A Meso Macro Model Prediction of Laminated Composite Damage Elastic Behaviour

Authors: A. Hocine, A. Ghouaoula, S. M. Medjdoub, M. Cherifi

Abstract:

The present paper proposes a meso-macro model describing the mechanical behaviour of composite laminates with stacking sequence [+θ/-θ]s under tensile loading. The behaviour of a layer is expressed through elasticity coupled to damage. The elastic strain is due to the elasticity of the layer and can be modeled using classical laminate theory, with the laminate considered an orthotropic material; no coupling effect between strain and curvature is considered. In the present work, the damage is associated with matrix cracking parallel to the fibers and is taken into account through changes in the stiffness of the layers. The anisotropic damage is completely described by a single scalar variable, and its evolution law is specified from the principle of maximum dissipation. The stress/strain relationship is investigated under plane stress loading.

Keywords: damage, behavior modeling, meso-macro model, composite laminate, membrane loading

Procedia PDF Downloads 470
5580 The UAV Feasibility Trajectory Prediction Using Convolution Neural Networks

Authors: Adrien Marque, Daniel Delahaye, Pierre Maréchal, Isabelle Berry

Abstract:

Wind direction and its uncertainty are crucial for aircraft and unmanned aerial vehicle trajectories. By computing wind covariance matrices at each spatial grid point, these spatial grids can be treated as images whose elements are symmetric positive definite (SPD) matrices. A data pre-processing step and specific convolution, max-pooling, and flatten layers are implemented to process such images. The neural network is then applied to spatial grids whose elements are wind covariance matrices, to solve classification problems related to the feasibility of unmanned aerial vehicle trajectories under a given wind direction and wind uncertainty.
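One common way to feed grids of SPD matrices to a standard CNN, shown below, is a log-Euclidean pre-processing step: each covariance matrix is mapped to its matrix logarithm and its unique entries become image channels. This is offered only as context; the paper instead defines its own SPD-specific convolution, max-pooling, and flatten layers.

```python
import numpy as np

def spd_log(grid):
    """grid: (H, W, 2, 2) array of SPD matrices -> (H, W, 3) channels.
    Log-matrices are symmetric, so a 2x2 matrix has 3 unique entries."""
    w, v = np.linalg.eigh(grid)                       # batched eigendecomposition
    # log(A) = V diag(log w) V^T, built by scaling eigenvector columns
    logm = (v * np.log(w)[..., None, :]) @ np.swapaxes(v, -1, -2)
    return np.stack([logm[..., 0, 0], logm[..., 0, 1], logm[..., 1, 1]],
                    axis=-1)

# an 8x8 grid of random SPD matrices (A @ A.T + eps*I is always SPD)
a = np.random.default_rng(0).normal(size=(8, 8, 2, 2))
grid = a @ np.swapaxes(a, -1, -2) + 0.1 * np.eye(2)
print(spd_log(grid).shape)                            # -> (8, 8, 3)
```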

Keywords: wind direction, uncertainty level, unmanned aerial vehicle, convolution neural network, SPD matrices

Procedia PDF Downloads 26
5579 Competitivity in Procurement Multi-Unit Discrete Clock Auctions: An Experimental Investigation

Authors: Despina Yiakoumi, Agathe Rouaix

Abstract:

Laboratory experiments were run to investigate the impact of different design characteristics of the auctions implemented to procure capacity in the UK's reformed electricity markets. The experiment studies competition among bidders in procurement multi-unit discrete descending clock auctions under different feedback policies and pricing rules. Theory indicates that the feedback policy, in combination with the two common pricing rules, last-accepted bid (LAB) and first-rejected bid (FRB), could significantly affect the auction outcome. Two information feedback policies regarding participants' bidding prices are considered: with feedback and without feedback. With feedback, after each round participants are informed of the number of items still in the auction; without feedback, participants have no information about aggregate supply. Under LAB, winning bidders receive the amount of the highest successful bid; under FRB, they receive the lowest unsuccessful bid. Based on the theoretical predictions for the alternative auction designs, three treatments were run: the first considers LAB with feedback, the second LAB without feedback, and the third FRB without feedback. Theoretical predictions of the game showed that under FRB, the alternative feedback policies make no difference to the auction outcome. Preliminary results indicate that LAB with feedback and FRB without feedback achieve on average higher clearing prices than LAB without feedback, although these clearing prices are on average lower than the theoretical predictions. Although under LAB without feedback theory predicts the clearing price will drop to the competitive equilibrium, the experimental results indicate that participants can still engage in cooperative behavior and drive up the auction price. It is shown, both theoretically and experimentally, that the pricing rule and the feedback policy affect the bidding competitiveness of the auction by providing opportunities for participants to engage in cooperative behavior and exercise market power. LAB without feedback seems less vulnerable to market power opportunities than the alternative auction designs, which could be an argument for using the LAB pricing rule in combination with limited feedback in the UK capacity market in an attempt to improve affordability for consumers.
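The mechanics of the two pricing rules can be illustrated with a toy simulation in which bidders exit truthfully when the descending price falls below their cost; real subjects may deviate from this, which is precisely what the experiment measures. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def clock_auction(costs, demand, start=100.0, step=1.0):
    """Procurement descending clock auction with truthful (cost-based)
    exit. Returns the LAB and FRB payments per unit; FRB is approximated
    by the price level at which the last exit occurred."""
    price = start
    active = np.sort(costs)                 # bidders still in the auction
    while len(active) > demand:             # descend until supply = demand
        price -= step
        active = active[active <= price]    # bidders exit below their cost
    lab = price                             # last-accepted bid: clearing price
    frb = price + step                      # first-rejected bid: last exit price
    return lab, frb

costs = rng.uniform(20, 80, size=10)        # 10 bidders, demand for 4 units
print("LAB price: %.1f, FRB price: %.1f" % clock_auction(costs, demand=4))
```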

Keywords: descending clock auctions, experiments, feedback policy, market design, multi-unit auctions, pricing rules, procurement auctions

Procedia PDF Downloads 292
5578 Evolution of Cord Absorbed Dose during Larynx Cancer Radiotherapy, with 3D Treatment Planning and Tissue Equivalent Phantom

Authors: Mohammad Hassan Heidari, Amir Hossein Goodarzi, Majid Azarniush

Abstract:

Radiation doses to tissues and organs were measured using the anthropomorphic phantom as an equivalent to the human body. When high-energy X-rays are externally applied to treat laryngeal cancer, the absorbed dose at the laryngeal lumen is lower than given dose because of air space which it should pass through before reaching the lesion. Specially in case of high-energy X-rays, the loss of dose is considerable. Three-dimensional absorbed dose distributions have been computed for high-energy photon radiation therapy of laryngeal and hypo pharyngeal cancers, using a coaxial pair of opposing lateral beams in fixed positions. Treatment plans obtained under various conditions of irradiation.

Keywords: 3D treatment planning, anthropomorphic phantom, larynx cancer, radiotherapy

Procedia PDF Downloads 542
5577 A Semantic Analysis of Modal Verbs in Barack Obama’s 2012 Presidential Campaign Speech

Authors: Kais A. Kadhim

Abstract:

This paper is a semantic analysis of English modals in Obama’s speeches. The main objective of this study is to analyze selected modal auxiliaries identified in speeches from Obama’s campaign, based on Coates’ (1983) semantic clusters. Fifteen speeches from the campaign were selected as the primary data, and the modal auxiliaries selected for analysis were will, would, can, could, should, must, ought, shall, may, and might. All modal auxiliaries taken from the speeches of Barack Obama were analyzed within the framework of Coates’ semantic clusters, to examine how modal auxiliaries are used to persuade people in Obama’s campaign speeches. The findings reveal that modals of intention, prediction, and futurity, and modals of possibility, ability, and permission are the most used in Obama’s campaign speeches.
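The first, mechanical step of such an analysis, locating and counting the ten target modals before hand-coding each occurrence into Coates' semantic clusters, can be sketched as follows; the sample sentence is invented.

```python
import re
from collections import Counter

# Count the ten target modal auxiliaries in a speech; the semantic-cluster
# assignment itself is done by hand on each occurrence in context.
MODALS = {"will", "would", "can", "could", "should", "must",
          "ought", "shall", "may", "might"}

def modal_counts(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in MODALS)

speech = "We can do this, we must do this, and together we will do this."
print(modal_counts(speech))   # Counter({'can': 1, 'must': 1, 'will': 1})
```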

Keywords: modals, meaning, persuasion, speech

Procedia PDF Downloads 399
5576 Mean Velocity Modeling of Open-Channel Flow with Submerged Vegetation

Authors: Mabrouka Morri, Amel Soualmia, Philippe Belleudy

Abstract:

Vegetation affects the mean and turbulent flow structure and may increase flood risks and sediment transport. It is therefore important to develop analytical approaches for the bed shear stress on vegetated beds in order to predict the resistance caused by vegetation. In recent years, both experimental and numerical models have been developed to represent the effects of submerged vegetation on open-channel flow. In this paper, different analytic models are compared and tested using deviation criteria to explore their capacity for predicting the mean velocity, and to select a suitable model for application to real rivers. The comparison between data measured in a vegetated flume and the simulated mean velocities indicated good performance in the case of rigid vegetation, with the Huthoff model showing the best agreement: a high coefficient of determination (R²=80%) and the smallest error in the prediction of the average velocities.
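The deviation criteria used to rank the models can be illustrated as below, with RMSE and the coefficient of determination computed against a measured velocity profile. The profiles shown are placeholder arrays, not the flume data.

```python
import numpy as np

def deviation_scores(u_measured, u_predicted):
    """Score a model's predicted velocity profile against measurements."""
    err = u_predicted - u_measured
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((u_measured - u_measured.mean()) ** 2))
    return rmse, 1.0 - ss_res / ss_tot      # (RMSE, R^2)

u_meas = np.array([0.12, 0.18, 0.25, 0.33, 0.41])   # m/s, over the depth
candidates = {"model A": np.array([0.11, 0.19, 0.24, 0.35, 0.40]),
              "model B": np.array([0.15, 0.15, 0.30, 0.30, 0.45])}
for name, u_pred in candidates.items():
    rmse, r2 = deviation_scores(u_meas, u_pred)
    print(f"{name}: RMSE={rmse:.3f} m/s, R^2={r2:.2f}")
```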

Keywords: analytic models, comparison, mean velocity, vegetation

Procedia PDF Downloads 271
5575 Measurement of VIP Edge Conduction Using Vacuum Guarded Hot Plate

Authors: Bongsu Choi, Tae-Ho Song

Abstract:

Vacuum insulation panels (VIPs) are promising thermal insulators for buildings, refrigerators, LNG carriers, and so on. In general, a VIP has a thermal conductivity of 2-4 mW/m·K. However, this is the conductivity measured at the center of the panel; the total effective thermal conductivity of a VIP is larger than this value due to edge conduction through the envelope. In this paper, the edge conduction of VIPs is examined theoretically, numerically, and experimentally. To confirm the existence of the edge conduction, a numerical analysis is performed for a simple two-dimensional VIP model, and a theoretical model is proposed to calculate the edge conductivity. The edge conductivity is also measured using a vacuum guarded hot plate, and the experiment is validated against the numerical analysis. The results show that the edge conductivity depends on the width of the panel and the thickness of the Al foil. To reduce edge conduction, it is recommended that VIPs be made as large as possible or with a thin Al-film envelope.

Keywords: envelope, edge conduction, thermal conductivity, vacuum insulation panel

Procedia PDF Downloads 397
5574 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching

Authors: Gianna Zou

Abstract:

Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique that estimates treatment effects by taking into account covariates that could impact the efficacy of study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed, and in some cases this assumption may not hold. In this paper, a machine learning method, Bayesian Additive Regression Trees (BART), is used as a more robust method of matching. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects. Moreover, it has the capability to handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. A comparison of the analysis results from PSM and BMM shows that BMM performs well and has better prediction capability when the response values are not normally distributed.
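A rough sketch of the counterfactual-prediction idea behind BMM: fit a flexible model of the outcome given covariates and treatment, then contrast each subject's predicted treated and untreated outcomes. A gradient-boosted tree ensemble stands in for BART here purely so the sketch runs with scikit-learn alone; a real implementation would use a BART package to retain full posterior uncertainty.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 4))                          # covariates
t = rng.integers(0, 2, size=n)                       # treatment indicator
# heterogeneous treatment effect: +2 only when X[:, 1] > 0 (true ATE = 1.0)
y = X[:, 0] + 2.0 * t * (X[:, 1] > 0) + rng.normal(size=n)

# tree-ensemble outcome model (stand-in for BART)
model = GradientBoostingRegressor(random_state=0)
model.fit(np.column_stack([X, t]), y)

y1 = model.predict(np.column_stack([X, np.ones(n)]))   # predicted if treated
y0 = model.predict(np.column_stack([X, np.zeros(n)]))  # predicted if untreated
print("estimated ATE:", round(float(np.mean(y1 - y0)), 2))
```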

Keywords: BART, Bayesian, matching, regression

Procedia PDF Downloads 139
5573 Solution Growth of Titanium Nitride Nanowires for Implantation Application

Authors: Roaa Sait, Richard Cross

Abstract:

The synthesis and characterization of one-dimensional nanostructures such as nanowires have received considerable attention. Much effort has concentrated on TiN, especially in the biological field, due to its useful and unique properties there. For the purpose of this project, the synthesis of titanium nitride (TiN) nanowires (NWs) will be presented. Titanium dioxide NWs are first grown in an aqueous solution at low temperature under atmospheric pressure and then subjected to a nitridation process, which results in the formation of TiN NWs. The structure, morphology, and composition of the grown nanowires will be characterized using Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), X-ray Diffraction (XRD), and Cyclic Voltammetry (CV). Obtaining TiN NWs is a challenging task that, as far as we are aware, has not been accomplished before, possibly because nitriding Ti-based NWs is difficult in terms of optimizing the experimental parameters.

Keywords: nanowires, dissolution-growth, nucleation, PECVD, deposition, spin coating, scanning electron microscopic analysis, cyclic voltammetry analysis

Procedia PDF Downloads 353
5572 Carbon Capture: Growth and Development of Membranes in Gas Sequestration

Authors: Sreevalli Bokka

Abstract:

Various technologies are emerging to capture carbon or reduce the carbon intensity of gas streams such as industrial effluent air and the atmosphere. Among these technologies, filter membranes are emerging as a key player in carbon sequestration; their key advantages are high surface area and porosity. Fabricating a filter membrane with high selectivity for carbon sequestration is challenging, as both material properties and processing parameters affect the membrane's properties. This study presents the development of filter membranes and the critical material properties that impact carbon sequestration.

Keywords: membranes, filtration, separations, polymers, carbon capture

Procedia PDF Downloads 61