Search results for: operational error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3163

1273 Nucleophile Mediated Addition-Fragmentation Generation of Aryl Radicals from Aryl Diazonium Salts

Authors: Elene Tatunashvili, Bun Chan, Philippe E. Nashar, Christopher S. P. McErlean

Abstract:

The reduction of aryl diazonium salts is one of the most efficient ways to generate aryl radicals for use in a wide range of transformations, including Sandmeyer-type reactions, Meerwein arylations of olefins and Gomberg-Bachmann-Hey arylations of heteroaromatic systems. The aryl diazonium species can be reduced electrochemically, by UV irradiation, inner-sphere and outer-sphere single electron transfer processes (SET) from metal salts, SET from photo-excited organic catalysts or fragmentation of adducts with weak bases (acetate, hydroxide, etc.). This paper details an approach for the metal-free reduction of aryl diazonium salts, which facilitates the efficient synthesis of various aromatic compounds under exceedingly mild reaction conditions. By measuring the oxidation potential of a number of organic molecules, a series of nucleophiles were identified that reduce aryl diazonium salts via the addition-fragmentation mechanism. This approach leads to unprecedented operational simplicity: The reactions are very rapid and proceed in the open air; there is no need for external irradiation or heating, and the process is compatible with a large number of radical reactions. We illustrate these advantages by using the addition-fragmentation strategy to regioselectively arylate a series of heterocyclic compounds, to synthesize ketones by arylation of silyl enol ethers, and to synthesize benzothiophene and phenanthrene derivatives by radical annulation reactions.

Keywords: diazonium salts, Hantzsch esters, oxygen, radical reactions, synthetic methods

Procedia PDF Downloads 138
1272 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system used to meet production target quantities and deadlines is the major production management system, controlling each facility's production order and the distribution of WIP (work in process). In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors affecting delivery time and to build a forecasting model. Among survival analysis techniques, the Cox proportional hazard model is used to select important explanatory variables, and the accelerated failure time (AFT) model is used to build the prediction model. Performance comparisons were conducted with two other models: a statistical model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. In terms of the mean square error (MSE) criterion, the AFT model reduced the error by 33.8% compared with the existing prediction model and by 5.3% compared with the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing and can contribute to improving the productivity and reliability of the production management system.
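
To make the modeling step concrete, here is a minimal sketch of fitting a Weibull AFT model with the Python lifelines library; the column names and values are hypothetical placeholders rather than the paper's FAB variables.

```python
# Minimal AFT sketch (lifelines API); data and columns are illustrative only.
import pandas as pd
from lifelines import WeibullAFTFitter

df = pd.DataFrame({
    "duration":  [12.0, 7.5, 30.2, 18.4, 9.1, 22.3, 14.8, 11.2],  # delivery times
    "delivered": [1, 1, 0, 1, 1, 1, 0, 1],      # 0 = still in transit (censored)
    "wip_level": [120, 80, 200, 150, 90, 170, 130, 100],  # hypothetical covariate
    "distance":  [3, 1, 5, 4, 2, 5, 3, 2],                # hypothetical covariate
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration", event_col="delivered")
print(aft.predict_median(df))   # predicted (median) delivery time per lot
```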

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 523
1271 M-Machine Assembly Scheduling Problem to Minimize Total Tardiness with Non-Zero Setup Times

Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi

Abstract:

Our objective is to minimize total tardiness in an m-machine two-stage assembly flowshop scheduling problem. Total tardiness is an important performance measure because the fulfillment of customers' due dates has to be taken into account when making scheduling decisions. In the literature, the problem is considered with zero setup times, which may not be realistic or appropriate for some scheduling environments. Considering setup times as separate from processing times increases machine utilization by decreasing idle time, and reduces total tardiness. We propose two new algorithms and adapt four existing algorithms from the literature, which are different versions of simulated annealing and genetic algorithms. Moreover, a dominance relation is developed based on the mathematical formulation of the problem and incorporated into our proposed algorithms. Computational experiments are conducted to investigate the performance of the newly proposed algorithms. We find that one of the proposed algorithms performs significantly better than the others; its error is at least 50% smaller than those of the other algorithms. The newly proposed algorithm is also efficient for the case of zero setup times and performs better than the best existing algorithm in the literature.
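
A toy version of the simulated-annealing component, sequencing jobs with separate non-zero setup times to minimize total tardiness, is sketched below; the data, moves, and cooling schedule are invented for illustration and are far simpler than the paper's algorithms.

```python
# Toy simulated annealing for total tardiness with separate setup times.
import math, random

proc  = [4, 7, 3, 6, 5]        # processing times (illustrative)
setup = [1, 2, 1, 3, 2]        # separate, non-zero setup times
due   = [10, 18, 7, 25, 15]    # due dates

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        t += setup[j] + proc[j]
        tard += max(0, t - due[j])
    return tard

seq = list(range(len(proc)))
cur, temp = total_tardiness(seq), 10.0
while temp > 0.01:
    i, k = random.sample(range(len(seq)), 2)
    seq[i], seq[k] = seq[k], seq[i]                  # swap two jobs
    cand = total_tardiness(seq)
    if cand <= cur or random.random() < math.exp((cur - cand) / temp):
        cur = cand                                   # accept the move
    else:
        seq[i], seq[k] = seq[k], seq[i]              # undo the swap
    temp *= 0.95                                     # cool down
print(cur, seq)
```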

Keywords: algorithm, assembly flowshop, scheduling, simulation, total tardiness

Procedia PDF Downloads 313
1270 A Stochastic Volatility Model for Optimal Market-Making

Authors: Zubier Arfan, Paul Johnson

Abstract:

The electronification of financial markets and the rise of algorithmic trading have sparked considerable interest from the mathematical community, in particular for the market-making problem. The research presented in this short paper solves the classic stochastic control problem to derive a market-maker's strategy. It also shows how to calibrate and simulate the strategy with real limit order book (LOB) data for back-testing. The ambiguity of limit-order priority in back-testing is handled by considering optimistic and pessimistic priority scenarios. Although this model outperforms a naive strategy, it assumes constant volatility and is therefore not best suited to the LOB data. The Heston model is introduced to describe the price and variance processes of the asset. The trader's constant absolute risk aversion utility function is optimized by numerically solving a three-dimensional Hamilton-Jacobi-Bellman partial differential equation to find the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is also less sensitive to calibration error than the constant volatility model.
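
For readers unfamiliar with the price model, the sketch below is a bare-bones Euler-Maruyama simulation of the Heston price/variance dynamics; all parameter values are illustrative, not the calibrated ones used in the paper.

```python
# Euler-Maruyama simulation of the Heston model (illustrative parameters).
import numpy as np

kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7  # mean reversion, long-run var, vol-of-vol, correlation
S0, v0, dt, n = 100.0, 0.04, 1 / 252, 252

rng = np.random.default_rng(0)
S, v = np.empty(n + 1), np.empty(n + 1)
S[0], v[0] = S0, v0
for t in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt + xi * np.sqrt(v[t] * dt) * z1, 0.0)
    S[t + 1] = S[t] * np.exp(-0.5 * v[t] * dt + np.sqrt(v[t] * dt) * z2)  # driftless mid-price
print(S[-1], v[-1])
```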

Keywords: market-making, market microstructure, stochastic volatility, quantitative trading

Procedia PDF Downloads 132
1269 Correlation Between Ore Mineralogy and the Dissolution Behavior of K-Feldspar

Authors: Adrian Keith Caamino, Sina Shakibania, Lena Sunqvist-Öqvist, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Feldspar minerals are one of the main components of the earth’s crust. They are tectosilicates, containing mainly aluminum and silicon along with either potassium, sodium, or calcium. Accordingly, feldspar minerals are categorized into three main groups: K-feldspar, Na-feldspar, and Ca-feldspar. In recent years, interest in K-feldspar has grown tremendously, given its potential to produce potash and alumina. However, feldspar minerals in general are difficult to decompose for the dissolution of their metallic components. Several methods, including intensive milling, leaching under elevated pressure and temperature, thermal pretreatment, and the use of corrosive leaching reagents, have been proposed to improve their low dissolution efficiency. In this study, as part of the POTASSIAL EU project, mechanical activation by intensive milling followed by leaching with hydrochloric acid (HCl) was applied to overcome the low dissolution efficiency of the K-feldspar components. The grinding operational parameters, namely time, rotational speed, and ball-to-sample weight ratio, were studied using the Taguchi optimization method. The mineralogy of the ground samples was then analyzed using a scanning electron microscope (SEM) equipped with automated quantitative mineralogy. After grinding, the prepared samples were subjected to HCl leaching. Finally, the dissolution efficiencies of the main elements and impurities of the different samples were correlated with the mineralogical characterization results. Correlating K-feldspar dissolution with ore mineralogy provides insight into how best to optimize leaching conditions for selective dissolution. It will also affect the subsequent purification steps and the final value recovery procedures.
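
For reference, parameter settings in a Taguchi study are commonly ranked by a larger-the-better signal-to-noise ratio, S/N = -10 log10(mean(1/y²)); the sketch below applies it to invented dissolution yields, not the project's measurements.

```python
# Larger-the-better Taguchi S/N ratio; yields are illustrative placeholders.
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

settings = {
    "10 min, 300 rpm": [62.0, 64.5],   # replicate dissolution yields (%)
    "20 min, 450 rpm": [71.2, 70.4],
}
for name, yields in settings.items():
    print(name, round(sn_larger_is_better(yields), 2), "dB")
```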

Keywords: K-feldspar, grinding, automated mineralogy, impurity, leaching

Procedia PDF Downloads 61
1268 Tracking Filtering Algorithm Based on ConvLSTM

Authors: Ailing Yang, Penghan Song, Aihua Cai

Abstract:

The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering, based on the Bayesian filtering framework, and extended Kalman filtering. However, these methods need prior knowledge, such as the kinematics model and the state-system distributions, and they perform poorly in state estimation for complex dynamic systems without such priors. Therefore, in view of the problems with traditional algorithms, a convolutional LSTM target state estimation algorithm based on self-attention memory (SAM), SAConvLSTM-SE, is proposed to learn the target's historical motion states and the error distribution of the current measurements. The measured track-point data of an airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly output the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than existing tracking methods.
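
For context, below is a minimal one-dimensional constant-velocity Kalman filter of the kind the abstract cites as the traditional baseline; the noise settings and measurements are illustrative only.

```python
# Constant-velocity Kalman filter baseline (1-D position measurements).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[1.0]])                   # measurement noise covariance

x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2, 5.1]:     # noisy position measurements (illustrative)
    x, P = F @ x, F @ P @ F.T + Q       # predict
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P         # update
print(x)                                # estimated [position, velocity]
```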

Keywords: maneuvering target, state estimation, Kalman filter, LSTM, self-attention

Procedia PDF Downloads 142
1267 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological, and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the consumption measured when the building is operational. Many buildings show significant differences between calculated and measured consumption. In order to assess performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based both on knowledge about the measurement process and on the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian statistical theory (BST). Another choice is the use of numerical methods like Monte Carlo simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS, and BST) is given. An office building has been monitored, and multiple sensors have been mounted on candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening, and occupancy rate are the features for our research work.
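
As a sketch of the Monte Carlo route, the snippet below propagates sensor uncertainties through a deliberately simplified steady-state consumption model; the model form and every distribution are assumptions for illustration, not the building model used in the paper.

```python
# Monte Carlo propagation of input uncertainty through a toy energy model.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
U    = rng.normal(0.8, 0.05, n)   # W/m^2.K, uncertain heat-loss coefficient (assumed)
A    = 102.0                      # m^2, monitored floor area (from the paper)
Tin  = rng.normal(21.0, 0.5, n)   # indoor temperature with sensor uncertainty
Tout = rng.normal(5.0, 1.0, n)    # outdoor temperature with sensor uncertainty
h    = 24 * 30                    # hours in the period

E = U * A * (Tin - Tout) * h / 1000.0   # kWh over the period
print(f"E = {E.mean():.0f} kWh, u(E) = {E.std(ddof=1):.0f} kWh (k=1)")
```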

Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method

Procedia PDF Downloads 442
1266 Taking the Whole Picture to Your Supply Chain; Customers Will Take Selfies When Expectations Are Met

Authors: Marcelo Sifuentes López

Abstract:

Strategic performance definition and follow-up processes have to be clear in order to provide value in today’s competitive world. Customer expectations must be linked to internal strategic objectives that lead to profitability, supported by visibility and flexibility, among others. By taking a whole picture of the supply chain, executives and their teams can define the current supply chain situation and gain insight into potential opportunities to improve processes and provide value to the main stakeholders. A systematic performance evaluation process based on operational and financial indicators defined by customer requirements needs to be implemented and periodically reviewed in order to mitigate costs and risks on time. Long-term supplier relationships and collaboration play a key role in using available resources, real-time communication, innovation, and new ways to capitalize on global opportunities like emerging markets; efforts have to focus on reducing uncertainties in supply and demand. Leadership has to promote consistency of communication and execution involving suppliers, customers, and the entire organization, supported by a strategic sourcing methodology that assures the targeted competitive strategy and sustainable growth. As customer requirements and expectations are met, results can be captured in a casual picture like a “selfie”, whose outcome can be perceived from any desired angle; or, like most “selfies”, it can be taken at arm's length by a third-party company rather than with a self-timer.

Keywords: supply chain management, competitive advantage, value creation, collaboration and innovation, global marketplace

Procedia PDF Downloads 428
1265 Real Time Implementation of Efficient DFIG-Variable Speed Wind Turbine Control

Authors: Fayssal Amrane, Azeddine Chaiba, Bruno Francois

Abstract:

In this paper, a design and experimental study based on direct power control (DPC) of a DFIG is proposed for stand-alone operation in a variable speed wind energy conversion system (VS-WECS). The proposed IDPC method is based on robust IP (integral-proportional) controllers that control the rotor side converter (RSC) by means of the rotor current d-q axis components (Ird* and Irq*) of the doubly fed induction generator (DFIG) through an AC-DC-AC converter. The implementation is realized using a dSPACE DS1103 card under sub- and super-synchronous operation (below and above the synchronous speed of 1500 rpm). Finally, experimental results demonstrate that the proposed IP-based control provides improved dynamic responses, and that the decoupled control drives the DFIG with high performance (good reference tracking, short response time, and low power error) despite sudden variations of wind speed and rotor reference currents.
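
The IP structure differs from a classical PI controller in that the proportional action operates on the measured output rather than on the error, which avoids setpoint kick. Below is a toy discrete-time sketch against a first-order plant stand-in; all gains and the plant are illustrative, not the experimental tuning.

```python
# Discrete IP (integral-proportional) controller on a first-order plant.
Ki, Kp, dt = 50.0, 0.5, 1e-4
integ, y = 0.0, 0.0
r = 1.0                          # rotor current reference, e.g. Ird* (p.u.)
for _ in range(2000):            # 0.2 s of simulated time
    integ += Ki * (r - y) * dt   # integral acts on the error
    u = integ - Kp * y           # proportional acts on the measured output
    y += dt * (u - y) / 0.01     # plant stand-in: first-order lag, tau = 10 ms
print(round(y, 3))               # settles close to the reference
```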

Keywords: direct power control (DPC), doubly fed induction generator (DFIG), wind energy conversion system (WECS), experimental study

Procedia PDF Downloads 115
1264 Enhancing a Recidivism Prediction Tool with Machine Learning: Effectiveness and Algorithmic Fairness

Authors: Marzieh Karimihaghighi, Carlos Castillo

Abstract:

This work studies how machine learning (ML) may be used to increase the effectiveness of a criminal recidivism risk assessment tool, RisCanvi. The two key dimensions of this analysis are predictive accuracy and algorithmic fairness. The ML-based prediction models obtained in this study are more accurate at predicting criminal recidivism than the manually created formula used in RisCanvi, achieving AUCs of 0.76 and 0.73 in predicting violent and general recidivism, respectively. However, the improvements are small, and algorithmic discrimination can easily be introduced between groups such as nationals vs. foreigners, or young vs. old. It is described how effectiveness and algorithmic fairness objectives can be balanced by applying a method in which a single error disparity, the generalized false positive rate, is minimized while calibration is maintained across groups. The results show that this bias mitigation procedure can substantially reduce generalized false positive rate disparities across multiple groups. Based on these results, it is proposed that ML-based criminal recidivism risk prediction should not be introduced without applying algorithmic bias mitigation procedures.
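
The generalized false positive rate can be computed directly from risk scores as the mean score assigned to observed non-recidivists; the sketch below measures its disparity across two hypothetical groups with made-up data.

```python
# Generalized FPR disparity across groups; all data are illustrative.
import numpy as np

scores = np.array([0.2, 0.7, 0.4, 0.9, 0.3, 0.8])   # predicted risk scores
y      = np.array([0,   1,   0,   1,   0,   0  ])   # observed recidivism
group  = np.array(["national", "foreigner", "national",
                   "foreigner", "foreigner", "national"])

def gfpr(s, labels):
    # generalized false positive rate: mean score among true negatives
    return s[labels == 0].mean()

for g in np.unique(group):
    m = group == g
    print(g, round(gfpr(scores[m], y[m]), 3))
```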

Keywords: algorithmic fairness, criminal risk assessment, equalized odds, recidivism

Procedia PDF Downloads 136
1263 Walmart Sales Forecasting using Machine Learning in Python

Authors: Niyati Sharma, Om Anand, Sanjeev Kumar Prasad

Abstract:

Estimating future sales is one of the essential elements of tactical planning for any organization. Walmart sales forecasting is a fine problem for a beginner to work with, since it comes with a large retail data set; Walmart has also used this sales forecasting problem for hiring purposes. We analyze how internal and external factors affecting one of the largest companies in the US can drive its weekly sales in the future. Demand forecasting is the estimation of the future demand for products or services on the basis of present and past data and the state of the market. Every organization faces an unknown future, and future demand for goods cannot be known exactly; hence, by exploring historical and current market statistics, we estimate the forthcoming demand for individual goods, which is all the more challenging in the near future. As a result, the required products can be produced in advance, in line with market demand. We test several machine learning models for accuracy and then train on the whole data set. Linear regression, fitted to the training data, achieves an accuracy of 8.88%, while the extra trees regression model gives the best accuracy of 97.15%.
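
The model comparison described here can be reproduced in outline with scikit-learn; the feature matrix below is synthetic, chosen only to show why an extra-trees ensemble can beat linear regression when the sales response is nonlinear.

```python
# Linear regression vs. extra trees on synthetic nonlinear "sales" data.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # stand-ins for temperature, fuel price, CPI, unemployment
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)  # nonlinear target

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), ExtraTreesRegressor(random_state=0)):
    model.fit(Xtr, ytr)
    print(type(model).__name__, round(r2_score(yte, model.predict(Xte)), 3))
```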

Keywords: random forest algorithm, linear regression algorithm, extra trees classifier, mean absolute error

Procedia PDF Downloads 129
1262 Structural Health Monitoring of Offshore Structures Using Wireless Sensor Networking under Operational and Environmental Variability

Authors: Srinivasan Chandrasekaran, Thailammai Chithambaram, Shihas A. Khader

Abstract:

Early-stage damage detection in offshore structures requires continuous structural health monitoring, and over a large area the positioning of sensors also plays an important role in efficient damage detection. Determining the dynamic behavior of offshore structures requires a dense deployment of sensors. Wired structural health monitoring (SHM) systems are highly expensive and need considerable installation space. Wireless sensor networks can enhance an SHM system through the deployment of a scalable sensor network that consumes less space. This paper presents the results of a wireless-sensor-network-based structural health monitoring method applied to a scaled experimental model of an offshore structure that underwent wave loading. The method determines the serviceability of the offshore structure, which is subjected to various environmental loads. Wired and wireless sensors were installed in the model, and the response of the scaled BLSRP model under wave loading was recorded. The wireless system discussed in this study is a Raspberry Pi board with an ARM v6 processor, programmed to transmit the data acquired by the sensor to the server using a Wi-Fi adapter; the data are then hosted on a webpage. The data acquired from the wireless and wired SHM systems were compared, and the design of the wireless system was verified.
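
In outline, the wireless chain described (sensor to Raspberry Pi to Wi-Fi to web server) reduces to a loop of the following shape; the server URL and the sensor driver are hypothetical placeholders.

```python
# Pi-side acquisition loop posting samples to a server (sketch only).
import time
import requests

SERVER = "http://192.168.0.10:5000/shm"   # hypothetical server endpoint

def read_accelerometer():
    # Placeholder for the actual ADC/sensor driver call.
    return {"t": time.time(), "accel_g": 0.0}

for _ in range(1000):                      # bounded run; a real logger loops forever
    requests.post(SERVER, json=read_accelerometer(), timeout=2)
    time.sleep(0.01)                       # ~100 Hz sampling
```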

Keywords: condition assessment, damage detection, structural health monitoring, structural response, wireless sensor network

Procedia PDF Downloads 259
1261 Machine Learning Approach for Mutation Testing

Authors: Michael Stewart

Abstract:

Mutation testing is a type of software testing, proposed in the 1970s, in which program statements are deliberately changed to introduce simple errors so that test cases can be validated by checking whether they detect those errors. Test cases are executed against the mutant code to determine whether one fails and detects the error, confirming that the program is properly tested. One major issue with this type of testing is that generating and testing all possible mutations of a complex program is computationally intensive. This paper used reinforcement learning and parallel processing within the context of mutation testing to select mutation operators and test cases, reducing the computational cost of testing and improving test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement-learning-based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve prediction accuracy. Performance was then evaluated on multiprocessor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50-100%.
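
One simple reinforcement-learning policy for mutation operator selection is an epsilon-greedy bandit over operators, sketched below; the operator set and kill probabilities are invented, and the paper's algorithm is not necessarily this one.

```python
# Epsilon-greedy selection of mutation operators (illustrative bandit).
import random

operators = ["AOR", "ROR", "LCR"]       # arithmetic/relational/logical replacement
value = {op: 0.0 for op in operators}   # running estimate of each operator's kill rate
count = {op: 0 for op in operators}
epsilon = 0.1

def run_mutation(op):
    # Placeholder for "mutate program, run test suite, return 1 if mutant killed".
    return random.random() < {"AOR": 0.6, "ROR": 0.8, "LCR": 0.4}[op]

for _ in range(500):
    if random.random() < epsilon:
        op = random.choice(operators)              # explore
    else:
        op = max(operators, key=value.get)         # exploit best-so-far operator
    reward = run_mutation(op)
    count[op] += 1
    value[op] += (reward - value[op]) / count[op]  # incremental mean update
print(value)   # estimates converge toward each operator's kill rate
```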

Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing

Procedia PDF Downloads 178
1260 Radio Frequency Identification Device Based Emergency Department Critical Care Billing: A Framework for Actionable Intelligence

Authors: Shivaram P. Arunachalam, Mustafa Y. Sir, Andy Boggust, David M. Nestler, Thomas R. Hellmich, Kalyan S. Pasupathy

Abstract:

Emergency departments (EDs) provide urgent care to patients throughout the day in a complex and chaotic environment. Real-time location systems (RTLS) are increasingly being utilized in healthcare settings and have been shown to improve safety, reduce cost, and increase patient satisfaction. Radio Frequency Identification Device (RFID) data in an ED have been used to compute variables such as patient-provider contact time, which is associated with patient outcomes such as 30-day hospitalization. These variables can provide avenues for improving ED operational efficiency. A major challenge in ED financial operations is under-coding of critical care services, owing to physicians’ difficulty in reporting accurate times for critical care provided under Current Procedural Terminology (CPT) codes 99291 and 99292. In this work, the authors propose a framework to optimize ED critical care billing using RFID data. RFID-estimated physician-patient contact times can accurately quantify direct critical care services, supporting a data-driven approach to ED critical care billing. This paper describes the framework and provides insight into opportunities to prevent under-coding, as well as over-coding that could trigger insurance audits. Future work will focus on data analytics to demonstrate the feasibility of the described framework.
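
Under the common CPT convention (99291 for the first 30-74 minutes of critical care, 99292 for each additional 30 minutes), RFID-estimated contact minutes map to codes roughly as sketched below; the thresholds should be verified against current payer rules.

```python
# Map critical-care minutes to CPT codes (common convention; verify with payer).
def critical_care_codes(minutes: int) -> list[str]:
    if minutes < 30:
        return []                 # below the billable threshold
    codes = ["99291"]             # first 30-74 minutes
    extra = minutes - 74
    while extra > 0:              # one 99292 per additional 30-minute block started
        codes.append("99292")
        extra -= 30
    return codes

print(critical_care_codes(45))    # ['99291']
print(critical_care_codes(135))   # ['99291', '99292', '99292', '99292']
```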

Keywords: critical care billing, CPT codes, emergency department, RFID

Procedia PDF Downloads 116
1259 Wear Resistance and Mechanical Performance of Ultra-High Molecular Weight Polyethylene Influenced by Temperature Change

Authors: Juan Carlos Baena, Zhongxiao Peng

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) is extensively used in industrial and biomedical fields. The slippery nature of UHMWPE makes this material suitable for surface-bearing applications; however, operational conditions limit lubrication efficiency, inducing boundary and mixed lubrication in the tribological system. The lack of lubrication in a tribological system intensifies friction, contact stress and, consequently, operating temperature. As temperature increases, the material’s mechanical properties are affected and the lifespan of the component is reduced. How the mechanical properties and wear performance of UHMWPE change as temperature increases has not been clearly established, yet such understanding is important for predicting and further improving the lifespan of these components. This study evaluates the effects of temperature variation in the range of 20 °C to 60 °C on the hardness and wear resistance of UHMWPE. A reduction in both hardness and wear resistance was observed with increasing temperature. The wear rate increased by 94.8% when the temperature changed from 20 °C to 50 °C. Although hardness is regarded as an indicator of wear resistance, this study found that wear resistance decreased more rapidly than hardness with increasing temperature, evidencing the low stability of this material over a short temperature interval. The reduction in hardness was reflected in the plastic deformation and abrasion intensity, resulting in a significant increase in wear rate.

Keywords: hardness, surface bearing, tribological system, UHMWPE, wear

Procedia PDF Downloads 251
1258 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification, and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod cube corner retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principles, gravitational self-energy (parametrized post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed tests of extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon; in fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget because lunar librations act on their multi-CCR geometry. We developed MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), a next-generation, single, large CCR unaffected by librations, which supports an improvement of the space segment of LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers, and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France, and Italy. INRRI was launched in 2016 on the ESA ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module) mission, deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA’s InSight Mars lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA’s Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and begin to populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 253
1257 Interlingual Interference in Students’ Writing

Authors: Zakaria Khatraoui

Abstract:

Interlanguage has come to occupy a central place in the field, both academically and pedagogically: it raises theoretical and linguistic questions while also offering a bridge between theory and educational practice. The present research develops a theoretical framework that is supported by empirical teaching practice. The focus of this interlingual study is placed on syntactic errors in students' written performance. To this end, the paper adopts a qualitative methodology built on a set of focused methodological choices and a solid design. The central finding is that cognitively driven intralingual errors clearly dominate over linguistically driven interlingual ones. The paper then highlights transferable implications of both theoretical and pedagogical value; in particular, the results are relevant to the scholarly community in a multidimensional sense and recommend actions of educational value.

Keywords: interlanguage, interference, error, writing

Procedia PDF Downloads 49
1256 Opportunities for Precision Feed in Apiculture

Authors: John Michael Russo

Abstract:

Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation, and these should transfer to apiculture; however, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed method is then evaluated. Finally, precision feed methods are examined as accelerants with potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state of the art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy the requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication; technologies such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate the specific needs of individual livestock. A major component is data: precision feed integrates precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies to translate data into optimized action in the apiary, particularly through automation.

Keywords: precision agriculture, precision feed, apiculture, honeybees

Procedia PDF Downloads 60
1255 Identification of Membrane Foulants in Direct Contact Membrane Distillation for the Treatment of Reject Brine

Authors: Shefaa Mansour, Hassan Arafat, Shadi Hasan

Abstract:

Management of reverse osmosis (RO) brine has become a major area of research due to the environmental concerns associated with it. This study examined the feasibility of a direct contact membrane distillation (DCMD) system for the treatment of RO brine. The system displayed great potential in terms of flux and salt rejection as different operating conditions, such as feed temperature, feed salinity, and feed and permeate flow rates, were varied. The highest flux reported was 16.7 LMH, with a salt rejection of 99.5%. Although DCMD displays potential for enhanced water recovery from highly saline solutions, one of the major drawbacks of the operation is membrane fouling, which impairs system performance. An operational run of 77 hours treating RO brine of 56,500 ppm salinity was performed in order to investigate the impact of membrane fouling on the overall operation over long running times. Over this period, the flux was observed to drop to about a quarter of its initial value. The fouled membrane was characterized using different techniques to identify the organic and inorganic foulants deposited on the membrane surface. Infrared (IR) spectroscopy was used to identify the organic foulants, and SEM images showed the surface characteristics of the membrane. The inorganic foulants were identified using X-ray diffraction (XRD), ion chromatography (IC), and energy dispersive spectroscopy (EDS). The major foulants found on the surface of the membrane were inorganic salts such as sodium chloride and calcium sulfate.

Keywords: brine treatment, membrane distillation, fouling, characterization

Procedia PDF Downloads 420
1254 A Study of Behaviors in Using Social Networks of Corporate Personnel of Suan Sunandha Rajabhat University

Authors: Wipada Chaiwchan

Abstract:

This research aims to study the social network usage behaviors of corporate personnel of Suan Sunandha Rajabhat University. The sample consisted of two groups: 70 academic officers and 143 operation officers. The research instrument was a questionnaire, and the data were analyzed using percentages, means (X), and standard deviations (S.D.); an independent-sample t-test was used to test the difference between the means of two independent samples, one-way ANOVA for analysis of variance, and multiple comparisons by Fisher’s least significant difference (LSD) to test pairs of means. The study found that most corporate personnel use social networks for information awareness, knowledge, and online conferences via social media, spending on average more than 3 hours per day, every day, including working time; most also have computers connected to the Internet at home and use social media for communication in operational processes. Social network usage behaviors were examined in relation to gender, age, job title, department, and type of personnel through hypothesis testing and analysis of variance. The analysis covered three aspects: the use of online social networks, the attitude of the users, and security. Corporate personnel of Suan Sunandha Rajabhat University rated all aspects at a high level, both overall and for each item: use of the social network (X=3.22), attitude of the users (X=3.06), security (X=3.11), and overall behavior (X=3.11).
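
The group comparisons mentioned here correspond to standard SciPy calls; the rating vectors below are illustrative, not the survey data.

```python
# Independent-samples t-test and one-way ANOVA (illustrative ratings).
from scipy import stats

academic  = [3.2, 3.5, 2.9, 3.8, 3.1]   # usage scores, academic officers
operation = [3.0, 3.4, 3.6, 2.8, 3.3]   # usage scores, operation officers

print(stats.ttest_ind(academic, operation, equal_var=False))  # Welch t-test
print(stats.f_oneway(academic, operation))                    # one-way ANOVA
```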

Keywords: social network, behaviors, social media, computer information systems

Procedia PDF Downloads 381
1253 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two types: the reflective model and the formative model. Before carrying out further tests on a SEM, certain assumptions must be met, notably the linearity assumption, which determines the form of the relationship. There are three modeling approaches to path analysis: parametric, nonparametric, and semiparametric. The aim of this research is to develop semiparametric SEM and obtain the best model. The data used in the research are secondary data serving as the basis for generating simulation data. Simulation data were generated with sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of the relationship studied were linear and quadratic, with one and two knot points, at several levels of error variance (EV = 0.5, 1, 5). Three levels of closeness of relationship were used in the measurement model: low (0.1-0.3), medium (0.4-0.6), and high (0.7-0.9). The best model was obtained with a linear form of the X1-Y1 relationship. In the measurement model, a characteristic of the reflective model was observed: the higher the closeness of the relationship, the better the model obtained. The originality of this research is the development of semiparametric SEM, which has not been widely studied by researchers.
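
The building block of the semiparametric path function is a truncated spline basis; a minimal linear one-knot version is sketched below, with an illustrative knot location.

```python
# Linear truncated spline basis with one knot: [1, x, (x - knot)_+].
import numpy as np

def truncated_linear_basis(x, knot):
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])

x = np.linspace(0, 10, 5)
print(truncated_linear_basis(x, knot=4.0))   # design matrix for the spline term
```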

Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model

Procedia PDF Downloads 19
1252 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform

Authors: Ashagrie Getnet Flattie

Abstract:

Line-of-sight (LOS) information, data rates, quality, and flexible network service are limited by the fact that, for the duration of any given connection, links experience severe variation in signal strength due to fading and path loss. Wireless systems face major challenges in achieving wide coverage and capacity without degrading performance, while providing access to data everywhere, all the time. In this paper, the cell coverage and edge rate of different multiple-input multiple-output (MIMO) schemes in a 20 MHz Long Term Evolution (LTE) system on an Unmanned Air Vehicle (UAV) platform are investigated. After some background on the enormous potential of UAVs, MIMO, and LTE in wireless links, the paper presents a system model that attempts to realize the various benefits of incorporating MIMO into a UAV platform. The performances of three MIMO LTE schemes are compared with that of 4x4 MIMO LTE in the UAV scheme, to evaluate the improvements in cell radius, bit error rate (BER), and data throughput of the system in different morphologies. The results show that significant performance gains in BER, data rate, and coverage can be achieved with the presented scenario.
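
The throughput gains reported here stem from the MIMO ergodic capacity law C = log2 det(I + (ρ/N) H Hᴴ); the Monte Carlo sketch below compares 1x1, 2x2, and 4x4 Rayleigh channels at an illustrative SNR.

```python
# Ergodic MIMO capacity over Rayleigh fading (illustrative SNR).
import numpy as np

rng = np.random.default_rng(0)
snr_db, trials = 10.0, 2000
snr = 10 ** (snr_db / 10)

def capacity(n):
    c = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        c += np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real)
    return c / trials

for n in (1, 2, 4):
    print(f"{n}x{n} MIMO: {capacity(n):.1f} bit/s/Hz")
```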

Keywords: LTE, MIMO, path loss, UAV

Procedia PDF Downloads 261
1251 Regionalization of IDF Curves with L-Moments for Storm Events

Authors: Noratiqah Mohd Ariff, Abdul Aziz Jemain, Mohd Aftar Abu Bakar

Abstract:

The construction of intensity-duration-frequency (IDF) curves is one of the most common and useful tools for designing hydraulic structures and for providing a mathematical relationship between rainfall characteristics. IDF curves, especially those in Peninsular Malaysia, are often built using moving windows of rainfall. However, these windows do not represent actual rainfall events, since the duration of rainfall is usually prefixed. Hence, instead of using moving windows, this study aims to find regionalized distributions for IDF curves of extreme rainfall based on storm events. A homogeneity test is performed on the annual maxima of storm intensities to identify homogeneous storm regions in Peninsular Malaysia. The L-moment method is then used to regionalize the generalized extreme value (GEV) distribution of these annual maxima, and IDF curves are subsequently constructed using the regional distributions. The differences between the IDF curves obtained and those found using at-site GEV distributions are assessed through the coefficient of variation of the root mean square error, the mean percentage difference, and the coefficient of determination. The small differences imply that the construction of IDF curves can be simplified by finding a general probability distribution for each region. This will also help in constructing IDF curves for sites with no rainfall station.
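
The first step of the L-moment procedure, estimating sample L-moments from probability-weighted moments, looks as follows; the annual maxima are illustrative values, not the Malaysian records.

```python
# Sample L-moments via unbiased probability-weighted moments.
import numpy as np

x = np.sort(np.array([23.0, 41.5, 30.2, 55.1, 28.9, 47.3, 35.0]))  # annual maxima
n = len(x)
i = np.arange(1, n + 1)

b0 = x.mean()
b1 = np.sum((i - 1) / (n - 1) * x) / n
b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n

l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
print("L-mean:", l1, "L-CV:", l2 / l1, "L-skewness:", l3 / l2)
```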

Keywords: IDF curves, L-moments, regionalization, storm events

Procedia PDF Downloads 508
1250 Application of Griddization Management to Construction Hazard Management

Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu

Abstract:

Hazard management that can prevent fatal accidents and property losses is a fundamental process during the construction stage of buildings. However, due to a lack of safety supervision resources and to operational pressures, hazard management in China is conducted poorly and ineffectively. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to make the hazard management process efficient and effective. After examining the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed, comprising five layers: a resource entity layer, an information management layer, a task management layer, a knowledge transformation layer, and an application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city level, district level, and construction site level, according to grid principles. Last, a griddization management process covering hazard identification, assessment, and control is developed; all stakeholders of construction safety management, such as owners, contractors, supervision organizations, and government departments, take corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment, and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the designed model is that it realizes information sharing and cooperative management between the various safety management departments.

Keywords: construction hazard, griddization computing, grid management, process

Procedia PDF Downloads 256
1249 Spatio-Temporal Variation of Suspended Sediment Concentration in the near Shore Waters, Southern Karnataka, India

Authors: Ateeth Shetty, K. S. Jayappa, Ratheesh Ramakrishnan, A. S. Rajawat

Abstract:

Suspended sediment concentration (SSC) was estimated for a period of four months (November 2013 to February 2014) using Oceansat-2 Ocean Colour Monitor satellite images, to understand coastal dynamics and regional sediment transport, especially the distribution and budgeting of sediment in coastal waters. The coastal zone undergoes continuous change due to natural processes and anthropogenic activities. The importance of the coastal zone, with respect to safety, ecology, economy, and recreation, demands a management strategy in which each of these aspects is taken into account. Monitoring and understanding sediment dynamics and suspended sediment transport is an important issue for coastal engineering activities. A study of the transport mechanism of suspended sediments in the nearshore environment is essential not only to safeguard marine installations and navigational channels, but also for coastal structure design, environmental protection, and disaster reduction. Such studies also help in the assessment of pollutants and other biological activity in the region. An accurate description of sediment transport, caused by waves and tidal or wave-induced currents, is of great importance in predicting coastal morphological changes. Satellite-derived SSC data have been found useful for Indian coasts because of their high spatial (360 m), spectral, and temporal resolutions. The present paper outlines the application of the state-of-the-art operational Indian remote sensing satellite Oceansat-2 to study the dynamics of sediment transport.

Keywords: suspended sediment concentration, ocean colour monitor, sediment transport, case-II waters

Procedia PDF Downloads 239
1248 Optimization of the Drinking Water Treatment Process Improvement of the Treated Water Quality by Using the Sludge Produced by the Water Treatment Plant

Authors: M. Derraz, M. Farhaoui

Abstract:

Problem statement: In water treatment, the coagulation and flocculation processes produce sludge in amounts that depend on the level of raw-water turbidity. Aluminum sulfate is the most common coagulant used in the water treatment plants of Morocco, as in many other countries. Sludge produced by a treatment plant is difficult to manage; however, it can be reused in the process to improve the quality of the treated water and reduce the aluminum sulfate dose. Approach: In this study, the effectiveness of sludge was evaluated at different turbidity levels (low, medium, and high) and coagulant dosages to find the optimal operational conditions; the influence of settling time was also studied. A set of jar test experiments was conducted to find the sludge and aluminum sulfate dosages that improve produced water quality at each turbidity level. Results: The results demonstrated that using the sludge produced by the treatment plant can improve the quality of the produced water and reduce aluminum sulfate use. The aluminum sulfate dosage can be reduced by 40 to 50%, depending on the turbidity level (10, 20, and 40 NTU). Conclusions/Recommendations: Sludge can be used to reduce the aluminum sulfate dosage and improve the quality of treated water. The highest turbidity removal efficiency was observed with 6 mg/l of aluminum sulfate and 35 mg/l of sludge at low turbidity, 20 mg/l of aluminum sulfate and 50 mg/l of sludge at medium turbidity, and 20 mg/l of aluminum sulfate and 60 mg/l of sludge at high turbidity. The corresponding turbidity removal efficiencies are 97.56%, 98.96%, and 99.47% for the low, medium, and high turbidity levels, respectively.
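
Each jar-test combination is scored by turbidity removal efficiency, computed as below; the readings are chosen so that the output reproduces the high-turbidity figure quoted above.

```python
# Turbidity removal efficiency from initial/final NTU readings.
def removal_efficiency(initial_ntu: float, final_ntu: float) -> float:
    return 100.0 * (initial_ntu - final_ntu) / initial_ntu

print(f"{removal_efficiency(40.0, 0.21):.2f} %")   # high-turbidity case, ~99.47 %
```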

Keywords: coagulation process, coagulant dose, sludge reuse, turbidity removal

Procedia PDF Downloads 220
1247 Algorithms Minimizing Total Tardiness

Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi

Abstract:

Total tardiness is a widely used performance measure in the scheduling literature. It is particularly important in situations where there is a cost to completing a job beyond its due date: the cost increases as the gap between a job's due date and its completion time grows. Such costs may be penalty costs in contracts or loss of goodwill. The measure matters because the fulfillment of customers' due dates has to be taken into account while making scheduling decisions. The problem is addressed in the literature, however, under the assumption of zero setup times. Even though this assumption may be valid for some environments, it is not valid for other scheduling environments. When setup times are treated as separate from processing times, it is possible to increase machine utilization and to reduce total tardiness; therefore, non-zero setup times need to be considered separately. A dominance relation is developed, and several algorithms that utilize it are proposed. Extensive computational experiments are conducted to evaluate the algorithms. The experiments indicate that the developed algorithms perform much better than the existing algorithms in the literature; more specifically, one of the newly proposed algorithms reduces the error of the best existing algorithm in the literature by 40 percent.

Keywords: algorithm, assembly flowshop, dominance relation, total tardiness

Procedia PDF Downloads 337
1246 Optical Signal-To-Noise Ratio Monitoring Based on Delay Tap Sampling Using Artificial Neural Network

Authors: Feng Wang, Shencheng Ni, Shuying Han, Shanhong You

Abstract:

With the development of optical communication, optical performance monitoring (OPM) has received more and more attention. Since the optical signal-to-noise ratio (OSNR) is directly related to the bit error rate (BER), it is one of the important parameters in optical networks. Artificial neural networks (ANNs), which have developed rapidly in recent years, offer strong learning and generalization abilities. In this paper, a method of OSNR monitoring based on delay-tap sampling (DTS) and an ANN is proposed. The DTS technique is used to extract the eigenvalues of the signal, which are then input into the ANN to realize OSNR monitoring. Experiments on 10 Gb/s non-return-to-zero (NRZ) on-off keying (OOK), 20 Gb/s pulse amplitude modulation (PAM4), and 20 Gb/s return-to-zero (RZ) differential phase-shift keying (DPSK) systems demonstrate OSNR monitoring based on the proposed method. The experimental results show that the OSNR monitoring range is 15 to 30 dB, and that the root-mean-square errors (RMSEs) for the 10 Gb/s NRZ-OOK, 20 Gb/s PAM4, and 20 Gb/s RZ-DPSK systems are 0.36 dB, 0.45 dB, and 0.48 dB, respectively. The impact of chromatic dispersion (CD) on the accuracy of OSNR monitoring is also investigated in the three experimental systems.
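
In outline, the DTS-plus-ANN pipeline is a regression from sampled features to OSNR; the scikit-learn sketch below uses synthetic features in place of the real delay-tap eigenvalues.

```python
# MLP regression from synthetic DTS-style features to OSNR (dB).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
osnr = rng.uniform(15, 30, 1000)                       # dB, the monitoring range
X = np.c_[osnr + rng.normal(0, 0.5, 1000),             # stand-ins for the eigenvalues
          np.sqrt(osnr) + rng.normal(0, 0.2, 1000)]    # extracted from DTS plots

Xtr, Xte, ytr, yte = train_test_split(X, osnr, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(Xtr, ytr)
rmse = mean_squared_error(yte, ann.predict(Xte)) ** 0.5
print(f"RMSE = {rmse:.2f} dB")
```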

Keywords: artificial neural network (ANN), chromatic dispersion (CD), delay-tap sampling (DTS), optical signal-to-noise ratio (OSNR)

Procedia PDF Downloads 96
1245 A Review of the Factors That Influence on Nutrient Removal in Upflow Filters

Authors: Ali Alzeyadi, Edward Loffill, Rafid Alkhaddar Ali Alattabi

Abstract:

Phosphate, ammonium, and nitrate are forms of nutrients released from various sources. High nutrient levels contribute to the eutrophication of water bodies by accelerating extraordinary algal growth. Recently, many filtration and treatment systems have been developed and used for different removal processes. Because of the enhanced operational characteristics of up-flow, continuous, granular media filters, researchers have become increasingly interested in further developing this technology and its performance for nutrient removal from wastewater. Environmental factors significantly affect filtration performance, and understanding their impact will help maintain the nutrient removal process. Phosphate removal by phosphate sorption materials (PSMs) and biological nitrogen removal are the removal methods discussed in this paper; hence, the factors that influence these processes are the scope of this work. The findings show that several factors affect both removal processes: the size, shape, and roughness of the filter media particles play a crucial role in supporting biofilm formation, and they also affect the reactivity of the surface between the media and phosphate. Many studies point to factors that significantly influence biological nitrogen removal, such as dissolved oxygen, temperature, and pH, owing to the sensitivity of biological processes, whereas phosphate removal by PSMs appears less affected by these factors. This review helps researchers form a comprehensive approach to studying nutrient removal in up-flow filtration systems.

Keywords: nitrogen biological treatment, nutrients, PSMs, upflow filter, wastewater treatment

Procedia PDF Downloads 302
1244 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India

Authors: Pardhasaradhi Gudla, Imanual A.

Abstract:

Electrical power generated offshore must be transmitted to the onshore grid through subsea cables. Long subsea cables produce reactive power, which must be compensated in order to limit transmission losses, optimize the transmission capacity, and keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structural design cost and the electrical system cost. India has targeted 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because sea depths off India are greater, installation costs will be higher than in European countries, where offshore wind energy is already being generated successfully, so innovations are required to reduce offshore wind project costs. This paper presents optimized techniques to reduce the installation cost of an offshore wind farm with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of the subsea cable by decreasing its reactive power generation (the capacitance effect). Many methods of reactive power compensation in wind power plants are already in use; the main driver of the need for compensation is the capacitance effect of the subsea cable. If the cable capacitance is reduced, the reactive power compensation requirement can be reduced or optimized, avoiding an intermediate substation at the midpoint of the transmission network.
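
The capacitance effect in question is the cable charging reactive power, Qc = 2πfCV²L; the sketch below evaluates it for typical, purely illustrative cable parameters.

```python
# Charging reactive power of a subsea export cable: Qc = 2*pi*f*C*V^2*L.
import math

f = 50.0       # Hz, system frequency
C = 0.2e-6     # F/km, cable capacitance per kilometre (illustrative)
V = 220e3      # V, line-to-line export voltage (illustrative)
L = 60.0       # km, cable length (illustrative)

Qc = 2 * math.pi * f * C * L * V ** 2 / 1e6   # Mvar
print(f"Charging reactive power: {Qc:.1f} Mvar")
```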

Keywords: offshore wind power, optimized techniques, power system, subsea cable

Procedia PDF Downloads 172