Search results for: partial least squares analysis
28397 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways
Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi
Abstract:
The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM for different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance from the rear bumper of a leading vehicle to the rear bumper of the following vehicle) measured from field traffic data. The study used data from four locations on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks on basic urban freeways (level terrain) were 1.35 and 1.60, respectively. For rural freeways, the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. Study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
Keywords: level of service, capacity analysis, lagging headway, trucks
Procedia PDF Downloads 355
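A minimal sketch of the headway-to-PCE logic described in this abstract, using synthetic observations and plain ordinary least squares as a stand-in for the paper's three-stage least squares system: each vehicle class gets a lagging-headway model in flow and speed, and the PCE of a truck class is the ratio of its predicted headway to the passenger-car headway under the same conditions.

```python
import numpy as np

# Illustrative only: synthetic data and OLS in place of the paper's 3SLS step.
rng = np.random.default_rng(0)

def fit_headway_model(q, v, h):
    X = np.column_stack([np.ones_like(q), q, v])   # intercept, flow, speed
    beta, *_ = np.linalg.lstsq(X, h, rcond=None)
    return beta

def predict_headway(beta, q, v):
    return beta @ np.array([1.0, q, v])

# synthetic observations for passenger cars (PC) and single-unit trucks (SUT)
q = rng.uniform(800, 1800, 200)                    # flow, veh/h/lane
v = rng.uniform(80, 110, 200)                      # speed, km/h
h_pc  = 30.0 - 0.004 * q - 0.05 * v + rng.normal(0, 0.5, 200)   # headway, m
h_sut = 40.0 - 0.004 * q - 0.05 * v + rng.normal(0, 0.5, 200)

beta_pc  = fit_headway_model(q, v, h_pc)
beta_sut = fit_headway_model(q, v, h_sut)

# PCE of a truck class = its predicted spatial lagging headway divided by the
# passenger-car headway under the same flow and speed conditions
q0, v0 = 1200.0, 100.0
pce_sut = predict_headway(beta_sut, q0, v0) / predict_headway(beta_pc, q0, v0)
print(f"Illustrative PCE for single-unit trucks: {pce_sut:.2f}")
```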
28396 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm
Authors: Essam Al Daoud
Abstract:
This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as the cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa, and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than the other implemented methods.
Keywords: least squares, neighbor joining, phylogenetic tree, wild dog pack
Procedia PDF Downloads 320
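The least-squares cost that drives such a tree search can be written directly: for each candidate tree, the pairwise path-length (patristic) distances it implies are compared with the observed distance matrix. A minimal sketch follows; the four-taxon topology, branch lengths, and distance matrix are hypothetical and purely for illustration.

```python
import numpy as np

# Least-squares cost for a hard-coded 4-taxon tree ((A,B),(C,D));
# bl = branch lengths [A, B, C, D, internal edge].
def tree_distances(bl):
    a, b, c, d, e = bl
    return np.array([
        [0.0,       a + b,     a + e + c, a + e + d],
        [a + b,     0.0,       b + e + c, b + e + d],
        [a + e + c, b + e + c, 0.0,       c + d    ],
        [a + e + d, b + e + d, c + d,     0.0      ],
    ])

def least_squares_cost(bl, observed):
    diff = tree_distances(bl) - observed
    iu = np.triu_indices_from(diff, k=1)          # count each taxon pair once
    return float(np.sum(diff[iu] ** 2))

observed = np.array([[0, 3, 7, 8],
                     [3, 0, 6, 7],
                     [7, 6, 0, 5],
                     [8, 7, 5, 0]], dtype=float)

print(least_squares_cost([1, 2, 2, 3, 2], observed))
```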
28395 Material Properties Evolution Affecting Demisability for Space Debris Mitigation
Authors: Chetan Mahawar, Sarath Chandran, Sridhar Panigrahi, V. P. Shaji
Abstract:
The ever-growing advancement in space exploration has led to an alarming concern for space debris removal, as debris restricts further launch operations and adventurous space missions; hence numerous studies have come up with technologies for re-entry prediction and material selection processes for mitigating space debris. The selection of material and operating conditions is determined with the objective of a lightweight structure and the ability to demise faster, subject to spacecraft survivability during its mission. The demisability of spacecraft depends on evolving thermal material properties such as emissivity, specific heat capacity, thermal conductivity, radiation intensity, etc. Therefore, this paper presents an analysis of the evolving thermal material properties of spacecraft, which affect the demisability process, and estimates the demise time using a demisability model that incorporates evolving thermal properties for sensible heating followed by the complete or partial break-up of the spacecraft. The demisability analysis thus identifies the most suitable spacecraft material based on the least estimated demise time, which fulfills the criteria of design-for-survivability as well as design-for-demisability.
Keywords: demisability, emissivity, lightweight, re-entry, survivability
Procedia PDF Downloads 115
28394 Severity Index Level in Effectively Managing Medium Voltage Underground Power Cable
Authors: Mohd Azraei Pangah Pa'at, Mohd Ruzlin Mohd Mokhtar, Norhidayu Rameli, Tashia Marie Anthony, Huzainie Shafi Abd Halim
Abstract:
Partial discharge (PD) diagnostic mapping testing is one of the main diagnostic testing techniques widely used in the field for onsite testing of underground power cables at the medium voltage level. The existence of PD activity is an early indication of insulation weakness; hence early detection of PD activity provides an initial prediction of the condition of the cable. To effectively manage the results of a PD mapping test, it is important to have acceptance criteria to facilitate the prioritization of mitigation actions. Tenaga Nasional Berhad (TNB), through its Distribution Network (DN) division, has since 2007 used a PD severity model named the Severity Index (SI) for offline PD mapping tests, developed on the basis of onsite test experience. However, the recommended actions associated with this severity index have never been revised since its establishment. At present, PD measurement data have increased extensively, so the severity level indication and the effectiveness of the recommended actions can be analyzed and verified again. Based on the new revision, the recommended action to be taken will better reflect the actual defect condition; hence preventive action plans can be prioritized accurately and maintenance expenditure minimized.
Keywords: partial discharge, severity index, diagnostic testing, medium voltage, power cable
Procedia PDF Downloads 186
28393 Thermal Analysis for Darcy Forchheimer Effect with Hybrid Ferro Fluid Flow
Authors: Behzad Ali Khan, M. Zubair Akbar Qureshi
Abstract:
The article analyzes the Darcy-Forchheimer flow of a 2D hybrid ferrofluid. The flow of the hybrid ferrofluid is driven through an unsteady porous channel, with classical liquid water treated as the base liquid. The flow in the permeable region is characterized by the Darcy-Forchheimer relation, and heat transfer phenomena are studied during the flow. The set of partial differential equations is transformed into an ordinary differential framework through appropriate variables. The numerical shooting method is executed for solving the simplified set of equations, and a numerical analysis (ND-Solve) is utilized to confirm the convergence of the applied technique. The influence of flow model quantities such as Pr (Prandtl number), r (porous medium parameter), F (Darcy porous medium parameter), Re (Reynolds number), and Pe (Peclet number) on the velocity and temperature fields is scrutinized through sketches. Physical factors such as f''(η) (skin friction coefficient) and θ'(η) (rate of heat transfer) are first derived and then presented in tables.
Keywords: Darcy-Forchheimer, hybrid ferrofluid, porous medium, porous channel
Procedia PDF Downloads 174
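The shooting method named above can be illustrated on a much simpler boundary-value problem than the ferrofluid system. The sketch below uses a toy pendulum-type equation (not the paper's transformed equations): the unknown initial slope is guessed and the far-boundary error is driven to zero with a bracketing root finder.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy shooting-method example: solve f'' = -sin(f), f(0) = 0, f(1) = 1.

def rhs(eta, y):
    f, fp = y
    return [fp, -np.sin(f)]

def boundary_error(slope):
    # integrate from eta = 0 with a trial slope and measure how far f(1) misses 1
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-9, atol=1e-11)
    return sol.y[0, -1] - 1.0

slope = brentq(boundary_error, 0.0, 2.0)   # bracket chosen by inspection
print(f"missing initial slope f'(0) ≈ {slope:.6f}")
```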
28392 Compressed Natural Gas (CNG) Injector Research for Dual Fuel Engine
Authors: Adam Majczak, Grzegorz Barański, Marcin Szlachetka
Abstract:
Environmental considerations necessitate the search for new energy sources. One of the available solutions is a partial replacement of diesel fuel by compressed natural gas (CNG) in compression ignition engines. This type of engine is used mainly in vans and trucks, and such units are also gaining more and more popularity in the passenger car market; in Europe, their share of this market reaches 50%. Diesel engines are also used in industry, in vehicles such as ships or locomotives. Diesel engines have higher emissions of nitrogen oxides in comparison to spark ignition engines. This can currently be limited by optimizing the combustion process and by using additional systems such as exhaust gas recirculation or AdBlue technology. The combustion of diesel fuel also emits particulate matter (PM) that is harmful to human health; its emission is limited by the use of a particulate filter. One method for reducing the emission of toxic components is the use of gaseous fuels such as liquefied petroleum gas (LPG, propane and butane) or compressed natural gas (CNG). In addition to the environmental aspects, there are also economic reasons for using gaseous fuels to power diesel engines. A total or partial replacement of diesel fuel is possible. Depending on the technology used and the percentage of diesel fuel replaced, it is possible to reduce the content of nitrogen oxides in the exhaust gas by up to 30%, particulate matter (PM) by 95%, and carbon monoxide by 20%, relative to the original diesel fuel. The research object is a prototype gas injector designed for direct injection of compressed natural gas (CNG) in compression ignition engines. The construction of the injector allows it to be positioned in the glow plug socket, so that the gas is injected directly into the combustion chamber. A cycle analysis of the four-cylinder Andoria ADCR engine with a capacity of 2.6 dm³ at different crankshaft rotational speeds allowed the time available for fuel injection to be determined. From this, it was possible to determine the injector mass flow rate required to replace as much of the original fuel as possible with gaseous fuel. To ensure a high flow rate inside the injector, a supply pressure of 1 MPa was applied. A high gas supply pressure requires large valve opening forces; for this purpose, an injector with a hydraulic control system, using a pressurized liquid for the opening process, was designed. On the basis of air pressure measurements in the flow line after the injector, an analysis of the opening and closing of the valve was made. Measurements of the injector outflow mass were also carried out. The results showed that the designed injector meets the requirements necessary to supply the ADCR engine with CNG fuel.
Keywords: CNG, diesel engine, gas flow, gas injector
Procedia PDF Downloads 493
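A back-of-the-envelope sketch of the sizing logic described above: the injection window at a given crankshaft speed fixes the time available per injection, and the CNG dose that replaces a chosen fraction of the diesel energy fixes the mass that must pass in that time. Every number below (injection window, per-cycle diesel dose, heating values, replacement fraction) is assumed for illustration and is not taken from the paper's measurements of the Andoria ADCR engine.

```python
# Rough sizing sketch for a required CNG injector mass flow rate (assumed inputs).

LHV_DIESEL = 42.5e6      # J/kg, typical lower heating value of diesel
LHV_CNG    = 50.0e6      # J/kg, typical lower heating value of natural gas

def injector_mass_flow(rpm, diesel_per_cycle_mg, replacement, window_deg):
    """Required mass flow (g/s) of one gas injector during its injection window."""
    deg_per_s = rpm / 60.0 * 360.0               # crank-angle degrees per second
    t_inj = window_deg / deg_per_s               # time available per injection, s
    # CNG mass carrying the same energy as the replaced fraction of the diesel dose
    m_cng = diesel_per_cycle_mg * 1e-6 * replacement * LHV_DIESEL / LHV_CNG
    return m_cng / t_inj * 1e3                   # g/s

for rpm in (1500, 2500, 3500):
    flow = injector_mass_flow(rpm, diesel_per_cycle_mg=40.0,
                              replacement=0.8, window_deg=120.0)
    print(f"{rpm} rpm -> required injector flow ≈ {flow:.1f} g/s")
```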
28391 A Clustering Algorithm for Massive Texts
Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen
Abstract:
Internet users face massive amounts of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised character. Unfortunately, most traditional clustering algorithms lose their high quality on large-scale text collections. This situation is mainly attributable to the high-dimensional vectors generated from texts. To effectively and efficiently cluster large-scale text collections, this paper proposes a vector-reconstruction-based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively. To accelerate clustering speed, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where features are reallocated among different clusters. In this sub-process, the features useless for representing a cluster are removed from the cluster's representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.
Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process
Procedia PDF Downloads 435
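The intersection-based similarity mentioned above is the step that keeps the comparison cost proportional to the shared features rather than the full vocabulary. The abstract does not give the exact measurement or neuron adjustment function, so the sketch below uses a generalized Jaccard weighting as a plausible stand-in.

```python
# Minimal sketch: only features present in BOTH sparse vectors contribute.

def intersection_similarity(doc, rep):
    """doc, rep: dicts mapping feature -> weight (sparse vectors)."""
    shared = doc.keys() & rep.keys()
    if not shared:
        return 0.0
    overlap = sum(min(doc[f], rep[f]) for f in shared)
    total = sum(doc.values()) + sum(rep.values()) - overlap
    return overlap / total            # generalized Jaccard coefficient

doc = {"price": 0.6, "market": 0.3, "stock": 0.1}
clusters = {
    "finance": {"price": 0.5, "stock": 0.4, "bank": 0.1},
    "sports":  {"match": 0.7, "score": 0.3},
}
best = max(clusters, key=lambda c: intersection_similarity(doc, clusters[c]))
print("assigned cluster:", best)
```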
28390 Humans Trust Building in Robots with the Help of Explanations
Authors: Misbah Javaid, Vladimir Estivill-Castro, Rene Hexel
Abstract:
The field of robotics is advancing rapidly to the point where robots have become an integral part of modern society. These robots collaborate productively with humans, compensating for some shortcomings of human abilities and complementing them with their own skills. Effective teamwork between humans and robots demands investigation of the critical issue of trust. The field of human-computer interaction (HCI) has already examined the trust humans place in technical systems, mostly on issues like reliability and accuracy of performance. Early work in the area of expert systems suggested that automatic generation of explanations improved the trust and acceptability of these systems. In this work, we augmented a robot with user-invoked explanation generation proficiency. To measure the effect of explanations on a human's level of trust, we collected subjective survey measures and behavioral data in a human-robot team task set in an interactive, adversarial, and partial-information environment. The results showed that with the explanation capability, humans not only understood but also recognized the robot as an expert team partner. It was also observed that human learning and human-robot team performance improved significantly because of the meaningful interaction with the robot in the human-robot team. Moreover, from these distinctive outcomes, we expect our research to provide insights into the further improvement of trustworthy human-robot relationships.
Keywords: explanation interface, adversaries, partial observability, trust building
Procedia PDF Downloads 200
28389 Quintic Spline Solution of Fourth-Order Parabolic Equations Arising in Beam Theory
Authors: Reza Mohammadi, Mahdieh Sahebi
Abstract:
We develop a method based on polynomial quintic splines for the numerical solution of fourth-order non-homogeneous parabolic partial differential equations with variable coefficients. By using polynomial quintic splines at off-step points in space and finite differences in the time direction, we obtain two three-level implicit methods. Stability analysis of the presented method has been carried out. We solve four test problems numerically to validate the derived method. Numerical comparison with other methods shows the superiority of the presented scheme.
Keywords: fourth-order parabolic equation, variable coefficient, polynomial quintic spline, off-step points
Procedia PDF Downloads 352
28388 Clustering Color Space, Time Interest Points for Moving Objects
Authors: Insaf Bellamine, Hamid Tairi
Abstract:
Detecting moving objects in sequences is an essential step for video analysis. This paper mainly contributes to the extraction and detection of Color Space-Time Interest Points (CSTIP). We propose a new method for the detection of moving objects, composed of two main steps. First, we suggest applying the Color Space-Time Interest Point (CSTIP) detection algorithm to both components of a color structure-texture image decomposition, which is based on a partial differential equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In a second stage, we address the problem of grouping the points (CSTIP) into clusters. Experiments and comparison with other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sport videos and animation movies.
Keywords: Color Space-Time Interest Points (CSTIP), color structure-texture image decomposition, motion detection, clustering
Procedia PDF Downloads 378
28387 Opacity Synthesis with Orwellian Observers
Authors: Moez Yeddes
Abstract:
The property of opacity is widely used in the formal verification of security in computer systems and protocols. Opacity is a general language-theoretic framework within which many security properties of a system can be expressed: a secret behaviour of a system is opaque if a passive attacker can never deduce its occurrence from the system observation. Instead of considering static observability, where the set of observable events is fixed off-line, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we introduce Orwellian partial observability, where unobservable events are not revealed provided that downgrading events never occur in the future of the trace. Orwellian partial observability is needed to model intransitive information flow; this Orwellian observability is known as the ipurge function. In previous work we showed that verifying whether a regular secret is opaque for a regular language L w.r.t. an Orwellian projection is PSPACE-complete, while the problem has been proved undecidable for a regular language L w.r.t. a general Orwellian observation function. In this paper, we address two problems of opacification of a regular secret ϕ for a regular language L w.r.t. an Orwellian projection: given L and a secret ϕ ∈ L, the first problem consists in computing some minimal regular super-language M of L, if it exists, such that ϕ is opaque for M, and the second consists in computing the supremal sub-language M′ of L such that ϕ is opaque for M′. We derive both language-theoretic characterizations and algorithms to solve these two dual problems.
Keywords: security policies, opacity, formal verification, Orwellian observation
Procedia PDF Downloads 225
28386 The Influence of Incorporating in the Concrete of Recycled Waste from Shredding Used Tires and Crushed Glass on Their Characteristics and Behavior
Authors: Samiha Ramdani, Abdelhamid Geuttala
Abstract:
There is no doubt that the ever-increasing quantities of used tires create environmental concerns. Algeria generates large amounts of industrial and household waste, such as used tires and coloured glass bottles and dishes, whose valorization in cementitious materials could be an interesting ecological and economical alternative that helps eliminate cumbersome landfills. This work is a contribution to the promotion of local materials through the use of waste tires and glass bottles in the development of a new cementitious composite with acceptable compressive strength and improved strain capacity. For this purpose, rubber crumb (GC) from shredded used tires was used as a partial replacement of quarry sand at 10%, 20%, 40%, and 60%. In addition, some mixtures also contain glass powder at 15% cement replacement by volume. The compressive strength, tensile strength, deformability, water permeability, and chloride ion penetration are studied. As a result, an acceptable compressive strength was obtained at substitution rates of 10% and 20% by volume, and the deformability of the composite increases with the replacement rate. The addition of finely ground glass as a partial replacement of cement in concrete increases the resistance to chloride ion penetration and reduces the water permeability, thereby increasing durability.
Keywords: crumb rubber, deformability, compressive strength, finely ground glass, durability, behavior law
Procedia PDF Downloads 321
28385 An Analytical Method for Bending Rectangular Plates with All Edges Clamped Supported
Authors: Yang Zhong, Heng Liu
Abstract:
The decoupling method and the modified Navier method are combined for accurate bending analysis of rectangular thick plates with all edges clamped. The basic governing equations for Mindlin plates are first decoupled into independent partial differential equations which can be solved separately. Using the modified Navier method, the analytic solution of a rectangular thick plate with all edges clamped is then derived. The solution method used in this paper leaves out the complicated derivation for calculating coefficients and obtains the solution to the problem directly. Numerical comparisons show the correctness and accuracy of the results.
Keywords: Mindlin plates, decoupling method, modified Navier method, bending rectangular plates
Procedia PDF Downloads 600
28384 Comparative Analysis of Enzyme Activities Concerned in Decomposition of Toluene
Authors: Ayuko Itsuki, Sachiyo Aburatani
Abstract:
In recent years, pollution of the environment by toxic substances has become a serious problem. While there are many methods of environmental clean-up, methods based on microorganisms are considered to be reasonable and safe for the environment. Compost is known to catabolize malodorous substances during its production process; however, the mechanism of this catabolizing system is not yet known. In the catabolization process, organic matter is turned into inorganic matter by the enzymes released from the many microorganisms that live in compost. In other words, the cooperation of activated enzymes in the compost decomposes malodorous substances. Thus, clarifying the interactions among enzymes is important for revealing the system by which compost catabolizes malodorous substances. In this study, we utilized a statistical method to infer the interactions among enzymes. We developed a method that combines partial correlation with cross correlation to estimate the relevance between enzymes, especially from time series data with few variables. Because cross correlation is used, we can estimate not only the associative structure but also the reaction pathway. We applied the developed method to the measured enzyme data and estimated the interactions among the enzymes in the decomposition mechanism of toluene.
Keywords: enzyme activities, comparative analysis, compost, toluene
Procedia PDF Downloads 273
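A small sketch of the two ingredients combined above, run on synthetic series rather than the study's enzyme measurements: a lagged cross-correlation that suggests which activity leads the other (hinting at a reaction pathway), and a partial correlation that removes the influence of a third enzyme. This is a generic illustration, not the study's estimator.

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Correlation between x(t) and y(t + lag)."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

def partial_correlation(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
e1 = rng.normal(size=60)                            # enzyme 1 activity
e2 = np.roll(e1, 2) + 0.3 * rng.normal(size=60)     # enzyme 2 follows e1 by ~2 steps
e3 = 0.5 * e1 + 0.5 * e2 + 0.3 * rng.normal(size=60)

best_lag = max(range(-5, 6), key=lambda k: cross_correlation(e1, e2, k))
print("lag with strongest e1 -> e2 correlation:", best_lag)
print("partial corr(e1, e2 | e3):", round(partial_correlation(e1, e2, e3), 3))
```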
28383 A Proof of the N. Davydov Theorem for Douglis Algebra Valued Functions
Authors: Jean-Marie Vilaire, Ricardo Abreu-Blaya, Juan Bory-Reyes
Abstract:
The classical Beltrami system of elliptic equations generalizes the Cauchy-Riemann equation in the complex plane and offers the possibility of considering homogeneous systems with no zero-order terms. The theory of Douglis-algebra-valued functions, called hyperanalytic functions, is a special case of the above situation. In this note, we prove an analogue of the N. Davydov theorem in the framework of the theory of hyperanalytic functions. The methodology used draws on characteristic methods of hypercomplex analysis as well as on the theories of singular integral operators and elliptic systems of partial differential equations.
Keywords: Beltrami equation, Douglis algebra-valued function, hypercomplex Cauchy type integral, Sokhotski-Plemelj formulae
Procedia PDF Downloads 250
28382 Tourism Satellite Account: Approach and Information System Development
Authors: Pappas Theodoros, Mihail Diakomihalis
Abstract:
Measuring the economic impact of tourism in a benchmark economy is a global concern, and previous measurements have been partial and not fully integrated. Tourism is a phenomenon driven by the individual consumption of visitors, which should be observed and measured to reveal the overall contribution of tourism to an economy. The Tourism Satellite Account (TSA) is a critical tool for assessing the annual growth of tourism, providing reliable measurements. This article introduces a TSA information system that encompasses all the work of the TSA, including the input, storage, management, and analysis of data, as well as additional future functions, and that enhances the efficiency of tourism data management and the utility of TSA compilation. The methodology and results presented offer insights into the development and implementation of the TSA.
Keywords: tourism satellite account, information system, data-based tourist account, relational database
Procedia PDF Downloads 84
28381 Scrutiny and Solving Analytically Nonlinear Differential at Engineering Field of Fluids, Heat, Mass and Wave by New Method AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
As most experts know, the behaviour of most practical engineering systems is nonlinear (especially in heat, fluid, and mass transfer), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; for example, fluid and gas wave problems cannot be solved with numerical methods when no boundary conditions are available. Accordingly, in this paper we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution procedure. The approach is assessed by comparing the achieved solutions with those of a numerical method (fourth-order Runge-Kutta), with other methods such as HPM and ADM, and with exact solutions. Eventually, it is argued that the AGM method could represent a significant advance for researchers, professors, and students in engineering and basic science throughout the world, because with the AGM coding system complicated linear and nonlinear differential equations can be solved analytically, so that there is no difficulty in solving nonlinear differential equations (ODEs and PDEs). In this paper, we investigate and solve four types of nonlinear differential equations with the AGM method: (1) heat and fluid problems, (2) an unsteady nonlinear partial differential equation, (3) coupled nonlinear partial differential equations in the wave equation, and (4) a nonlinear integro-differential equation.
Keywords: new method AGM, sets of coupled nonlinear equations in engineering, wave equations, integro-differential, fluid and thermal
Procedia PDF Downloads 546
28380 Analysis of Extreme Rainfall Trends in Central Italy
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini
Abstract:
The trend in the magnitude and frequency of extreme rainfall appears to differ depending on the area of the world investigated. In this work, the impact of climate change on extreme rainfall in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitudes ranging from 200 to more than 2000 m a.s.l. The climate differs greatly from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2 °C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with very large economic impact. A least-squares linear trend analysis of annual maxima over 60 time series, selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h), showed about 50% positive and 50% negative cases. For the same time series, the non-parametric Mann-Kendall test with a significance level of 0.05 evidenced only 3% of cases characterized by a negative trend and no positive case. Further investigations demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis of the magnitude of extreme rainfall gives no indication of an evident trend in the Umbria region. However, the frequency of rainfall events with particularly high rainfall depths occurring during a fixed period also has to be considered. For all selected stations, the 2-day rainfall events exceeding 50 mm were counted for each year, from the first monitored year to the end of 2015. This analysis did not show predominant trends either: for all selected rain gauge stations, the annual number of 2-day rainfall events exceeding the threshold value (50 mm) was slowly decreasing in time, while the annual cumulative rainfall depths corresponding to the same events showed trends that were not statistically significant. Overall, using a wide available dataset and adopting simple methods, no influence of climate change on heavy rainfall in the Umbria region is detected.
Keywords: climate change, rainfall extremes, rainfall magnitude and frequency, central Italy
Procedia PDF Downloads 236
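For reference, a minimal version of the Mann-Kendall trend test applied above to each annual-maximum series is sketched below. It omits the tie correction and runs on a short synthetic series, not the Umbrian records.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    """Mann-Kendall trend test (no tie correction): returns S, Z, p, verdict."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))           # two-sided p-value
    trend = "no significant trend"
    if p < alpha:
        trend = "increasing" if z > 0 else "decreasing"
    return s, z, p, trend

annual_max_1h = [22, 31, 18, 27, 25, 30, 21, 26, 29, 24,
                 28, 23, 27, 32, 20, 26, 30, 25, 28, 27]   # mm, synthetic
print(mann_kendall(annual_max_1h))
```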
28379 Low-Complexity Multiplication Using Complement and Signed-Digit Recoding Methods
Authors: Te-Jen Chang, I-Hui Pan, Ping-Sheng Huang, Shan-Jen Cheng
Abstract:
In this paper, a fast multiplication method utilizing complement representation and the canonical recoding technique is proposed. By performing complementation and canonical recoding, the number of partial products can be reduced. Based on these techniques, we propose an algorithm that provides an efficient multiplication method. On average, the proposed algorithm reduces the number of k-bit additions from (0.25k + logk/k + 2.5) to (k/6 + logk/k + 2.5), where k is the bit length of the multiplicand A and multiplier B. We can therefore efficiently speed up the overall performance of the multiplication. Moreover, if we use the new proposals to compute common-multiplicand multiplication, the computational complexity can be reduced from (0.5k + 2logk/k + 5) to (k/3 + 2logk/k + 5) k-bit additions.
Keywords: algorithm design, complexity analysis, canonical recoding, public key cryptography, common-multiplicand multiplication
Procedia PDF Downloads 435
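Canonical recoding rewrites the multiplier with signed digits {-1, 0, 1} so that no two adjacent digits are non-zero, which brings the expected number of non-zero digits (and hence partial-product additions/subtractions) down to roughly k/3. A standard non-adjacent-form recoder is sketched below as a generic illustration, not as the paper's exact algorithm.

```python
def naf(k):
    """Return the non-adjacent form (canonical signed-digit) of k, LSB first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k & 3)        # +1 if k ≡ 1 (mod 4), -1 if k ≡ 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

b = 0b10111011101                  # example multiplier
recoded = naf(b)
print("NAF digits (LSB first):", recoded)
print("non-zero digits:", sum(d != 0 for d in recoded),
      "vs", bin(b).count("1"), "ones in the plain binary form")
# Sanity check: the recoded digits still represent the same value.
assert sum(d << i for i, d in enumerate(recoded)) == b
```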
28378 The Use of Palm Kernel Shell and Ash for Concrete Production
Authors: J. E. Oti, J. M. Kinuthia, R. Robinson, P. Davies
Abstract:
This work reports the potential of using palm kernel (PK) ash and shell as partial substitutes for Portland cement (PC) and coarse aggregate in the development of mortar and concrete. PK ash and shell are agro-waste materials from palm oil mills, and their disposal is an environmental problem of concern. PK ash has pozzolanic properties that enable it to act as a partial replacement for cement, and it also plays an important role in the strength and durability of concrete; its use in concrete would alleviate the increasing challenges of scarcity and high cost of cement. In order to investigate the PC replacement potential of PK ash, three types of PK ash were produced at varying temperatures (350-750 °C) and used to replace up to 50% of the PC. The PK shell was used to replace up to 100% of the coarse aggregate in order to study its aggregate replacement potential. The testing programme included material characterisation, the determination of compressive strength and tensile splitting strength, and chemical durability in aggressive sulfate-bearing exposure conditions. The 90-day compressive strength results showed a significant strength gain (up to 26.2 N/mm²). Portland cement and conventional coarse aggregate had a significantly higher influence on strength gain than the equivalent PK ash and PK shell. The chemical durability results demonstrated that, after a prolonged period of exposure, significant strength losses were observed in all the concretes. This phenomenon is explained by the change in concrete morphology, the inhibition of reaction species, and the final disruption of the aggregate-cement paste matrix.
Keywords: sustainability, concrete, mortar, palm kernel shell, compressive strength, consistency
Procedia PDF Downloads 396
28377 National Culture, Personal Values, and Supervisors’ Ethical Behavior: Examining a Partial Mediation Model of Merton’s Anomie Theory
Authors: Kristine Tuliao
Abstract:
Although ensuring that supervisors behave appropriately is of primary concern, research shows that unethical behaviors are prevalent and may cost organizations economic and reputational damage. Nevertheless, few studies have considered the roles of the different levels of values in shaping one's ethicality, and the possible mediation in the process of their influence has rarely been examined. To address this gap, this research employs Merton's anomie theory in designing a mediation analysis to test the direct impacts of national cultural values on supervisors' justification of unethical behaviors, as well as their indirect impacts through personal values. According to Merton's writings, individual behaviors are affected by the society's culture, given its role in defining members' goals as well as the acceptable methods of attaining those goals. Merton's framework also suggests that individuals develop their personal values through the assimilation of their society's culture. Using data from 9,813 supervisors across 30 countries, results of hierarchical linear modeling (HLM) indicated that national cultural values, specifically assertiveness, performance orientation, in-group collectivism, and humane orientation, positively affect supervisors' unethical inclination. Some cultural values may encourage unethical tendencies, especially if they urge and pressure individuals to attain purely monetary success. In addition, some of the influence of national cultural values went through personal monetary and non-monetary success values, indicating partial mediation. These findings substantiate the assertion of Merton's anomie theory that national cultural values influence supervisors' ethics through their integration with personal values. Given that some of the results contradict the propositions of Merton's anomie theory, complementary arguments, such as incomplete assimilation of culture and the probable impact of job position on perceptions, values, and behaviors, are plausible rationales for these outcomes. Consequently, this paper advances the understanding of differences in national and personal values and how these factors impact supervisors' justification of unethical behaviors. Alongside these contributions, suggestions are presented for the public and for organizations to craft policies and procedures that will minimize the tendency of supervisors to commit unethical acts.
Keywords: mediation model, national culture, personal values, supervisors' ethics
Procedia PDF Downloads 198
28376 Support Vector Regression with Weighted Least Absolute Deviations
Authors: Kang-Mo Jung
Abstract:
The least squares support vector machine (LS-SVM) is a penalized regression method which considers both the fit and the generalization ability of a model. However, the squared loss function is very sensitive to even a single outlier. We propose a weighted absolute deviation loss function to improve the robustness of the estimates in the least absolute deviation support vector machine. The proposed estimates can be obtained by a quadratic programming algorithm. Numerical experiments on simulated datasets show that the proposed algorithm is competitive in terms of robustness to outliers.
Keywords: least absolute deviation, quadratic programming, robustness, support vector machine, weight
Procedia PDF Downloads 527
28375 Development of Method for Detecting Low Concentration of Organophosphate Pesticides in Vegetables Using near Infrared Spectroscopy
Authors: Atchara Sankom, Warapa Mahakarnchanakul, Ronnarit Rittiron, Tanaboon Sajjaanantakul, Thammasak Thongket
Abstract:
Vegetables are frequently contaminated with pesticide residues, which makes them the agricultural products of greatest food safety concern. The objective of this work was to develop a method for detecting organophosphate (OP) pesticide residues in vegetables using near infrared (NIR) spectroscopy. Low concentrations (ppm) of OP pesticides in vegetables were investigated. The experiment was divided into two sections. In the first section, Chinese kale spiked with different concentrations of chlorpyrifos residues (0.5-100 ppm) was chosen as the sample model to establish the appropriate conditions of sample preparation, both for solution and for solid samples. The spiked samples were extracted with acetone. The sample extracts were applied as solution samples, while the solid samples were prepared by the dry-extract system for infrared (DESIR) technique. The DESIR technique was performed by embedding the solution sample on filter paper (GF/A) and then drying. The NIR spectra were measured in transflectance mode over the wavenumber region 12,500-4,000 cm⁻¹. The QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS) was performed as the standard method. The results from the first section showed that the DESIR technique with NIR spectroscopy gave a good, accurate calibration result, with an R² of 0.93 and an RMSEP of 8.23 ppm. In the case of solution samples, however, the prediction from the NIR-PLSR (partial least squares regression) equation showed poor performance (R² = 0.16 and RMSEP = 23.70 ppm). In the second section, the DESIR technique coupled with NIR spectroscopy was applied to the detection of OP pesticides in vegetables. Vegetables (Chinese kale, cabbage, and hot chili) were spiked with OP pesticides (chlorpyrifos, ethion, and profenofos) at different concentrations ranging from 0.5 to 100 ppm. Solid samples were prepared (based on the DESIR technique), and the samples were then scanned with an NIR spectrophotometer at ambient temperature (25 ± 2 °C). The NIR spectra were measured as in the first section. The NIR-PLSR gave the best calibration equations for detecting low concentrations of chlorpyrifos residues in vegetables (Chinese kale, cabbage, and hot chili), with prediction-set R² and RMSEP of 0.85-0.93 and 8.23-11.20 ppm, respectively. For ethion residues, the best calibration equations showed R² and RMSEP of 0.88-0.94 and 7.68-11.20 ppm, respectively. Likewise, for profenofos, the NIR-PLSR gave the best calibration equations with R² and RMSEP of 0.88-0.97 and 5.25-11.00 ppm, respectively. Moreover, the calibration equations developed in this work could rapidly predict the concentrations of OP pesticide residues (0.5-100 ppm) in vegetables, and there was no significant difference between NIR-predicted values and actual values (data from GC-MS) at a confidence interval of 95%. The proposed method using NIR spectroscopy with the DESIR technique has proved to be an efficient method for the screening detection of OP pesticide residues at low concentrations, and thus increases the food safety potential of vegetables for domestic and export markets.
Keywords: NIR spectroscopy, organophosphate pesticide, vegetable, food safety
Procedia PDF Downloads 150
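The quantitative NIR-PLSR calibration step described above can be sketched with scikit-learn. The spectra below are simulated (a single concentration-dependent band plus noise), so the printed R² and RMSEP are purely illustrative, and the DESIR sample preparation is not modelled.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 400
conc = rng.uniform(0.5, 100.0, n_samples)                 # ppm chlorpyrifos
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 180) / 12.0) ** 2)
spectra = conc[:, None] * peak[None, :] * 0.01 \
          + rng.normal(0.0, 0.05, (n_samples, n_wavenumbers))

# split into calibration and prediction (validation) sets
X_cal, X_val, y_cal, y_val = train_test_split(spectra, conc,
                                              test_size=0.3, random_state=1)
pls = PLSRegression(n_components=6)
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

print("R2   :", round(r2_score(y_val, y_pred), 3))
print("RMSEP:", round(np.sqrt(mean_squared_error(y_val, y_pred)), 2), "ppm")
```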
28374 Design of Multiband Microstrip Antenna Using Stepped Cut Method for WLAN/WiMAX and C/Ku-Band Applications
Authors: Ahmed Boutejdar, Bishoy I. Halim, Soumia El Hani, Larbi Bellarbi, Amal Afyf
Abstract:
In this paper, a planar monopole antenna for multiband applications is proposed. The antenna structure operates at three operating frequencies, 3.7, 6.2, and 13.5 GHz, which cover different communication frequency ranges. The antenna consists of a quasi-modified rectangular radiating patch with a partial ground plane and two parasitic elements (open-loop ring resonators) that serve as coupling bridges. A stepped cut at the lower corners of the radiating patch and the partial ground plane is used to achieve the multiband features. The proposed antenna is manufactured on an FR4 substrate and is simulated and optimized using the High Frequency Structure Simulator (HFSS). The antenna topology occupies 30.5 x 30 x 1.6 mm³. The measured results demonstrate that the candidate antenna has impedance bandwidths for 10 dB return loss from 3.80-3.90 GHz, 4.10-5.20 GHz, 11.2-11.5 GHz, and 12.5-14.0 GHz, which meet the requirements of wireless local area network (WLAN), worldwide interoperability for microwave access (WiMAX), C-band (uplink), and Ku-band (uplink) applications. Acceptable agreement is obtained between measurement and simulation results. Experimental results show that the antenna is successfully simulated and measured, that the tri-band behaviour can be achieved by adjusting the lengths of the three elements, and that the antenna gives good gains across all the operating bands.
Keywords: planar monopole antenna, FR4 substrate, HFSS, WLAN, WiMAX, C and Ku
Procedia PDF Downloads 190
28373 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators
Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean
Abstract:
In this paper numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception: even a 3-parameter model like the variogram can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
Keywords: asymptotic relative efficiency, non-linear rank-based, rank estimates, variogram
Procedia PDF Downloads 431
28372 Influence of Optimization Method on Parameters Identification of Hyperelastic Models
Authors: Bale Baidi Blaise, Gilles Marckmann, Liman Kaoye, Talaka Dya, Moustapha Bachirou, Gambo Betchewe, Tibi Beda
Abstract:
This work highlights the capabilities of the particle swarm optimization (PSO) method for identifying the parameters of hyperelastic models. The study compares this method with the genetic algorithm (GA) method, the least squares (LS) method, the pattern search algorithm (PSA) method, the Beda-Chevalier (BC) method and the Levenberg-Marquardt (LM) method. Four classic hyperelastic models are used to test the different methods through parameter identification. The study then compares the ability of these models to reproduce experimental Treloar data in simple tension, biaxial tension and pure shear.
Keywords: particle swarm optimization, identification, hyperelastic, model
Procedia PDF Downloads 171
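A minimal PSO sketch for this kind of identification problem, assuming a two-parameter Mooney-Rivlin model and synthetic uniaxial nominal-stress data rather than Treloar's measurements or the paper's exact PSO variant: the swarm minimizes the sum of squared residuals between model and "experimental" stress.

```python
import numpy as np

def mooney_rivlin_uniaxial(lam, c1, c2):
    # nominal stress of an incompressible Mooney-Rivlin solid in uniaxial tension
    return 2.0 * (c1 + c2 / lam) * (lam - lam ** -2)

lam = np.linspace(1.1, 4.0, 25)
stress_exp = mooney_rivlin_uniaxial(lam, 0.20, 0.05)   # synthetic "data" (MPa)

def cost(p):
    return np.sum((mooney_rivlin_uniaxial(lam, *p) - stress_exp) ** 2)

rng = np.random.default_rng(0)
n_particles, n_iter, w, c_p, c_g = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 1.0, (n_particles, 2))          # search box for (C1, C2)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c_p * r1 * (pbest - pos) + c_g * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("identified (C1, C2):", np.round(gbest, 4))
```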
28371 Comparative Dielectric Properties of 1,2-Dichloroethane with n-Methylformamide and n,n-Dimethylformamide Using Time Domain Reflectometry Technique in Microwave Frequency
Authors: Shagufta Tabassum, V. P. Pawar, jr., G. N. Shinde
Abstract:
The dielectric relaxation properties of polar liquids in a binary mixture have been studied at 10, 15, 20 and 25 ºC for 11 different concentrations using the time domain reflectometry (TDR) technique. The dielectric properties of a solute-solvent mixture of polar liquids in the frequency range of 10 MHz to 30 GHz give information regarding the formation of monomers and multimers, as well as the interaction between the molecules of the liquid mixture under study. The dielectric parameters have been obtained by the least squares fit method using the Debye equation, characterized by a single relaxation time without a distribution of relaxation times.
Keywords: excess properties, relaxation time, static dielectric constant, time domain reflectometry technique
Procedia PDF Downloads 155
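The single-relaxation Debye model referred to above can be written as ε*(ω) = ε∞ + (ε₀ − ε∞)/(1 + jωτ) and fitted by least squares on the real and imaginary parts together. A sketch on a synthetic spectrum (not TDR data) follows; the starting values and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def debye(params, w):
    eps0, eps_inf, tau = params
    return eps_inf + (eps0 - eps_inf) / (1.0 + 1j * w * tau)

def residuals(params, w, eps_meas):
    diff = debye(params, w) - eps_meas
    return np.concatenate([diff.real, diff.imag])   # fit both parts at once

freq = np.logspace(7, 10.5, 60)                 # 10 MHz to ~30 GHz
w = 2.0 * np.pi * freq
true = (45.0, 3.5, 40e-12)                      # eps0, eps_inf, tau = 40 ps
rng = np.random.default_rng(0)
eps_meas = debye(true, w) + rng.normal(0, 0.2, w.size) * (1 + 1j)

fit = least_squares(residuals, x0=(50.0, 5.0, 10e-12), args=(w, eps_meas),
                    bounds=([1.0, 1.0, 1e-13], [200.0, 20.0, 1e-9]))
eps0, eps_inf, tau = fit.x
print(f"eps0 = {eps0:.1f}, eps_inf = {eps_inf:.1f}, tau = {tau * 1e12:.1f} ps")
```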
28370 A Methodology for Optimisation of Water Containment Systems
Authors: Amir Hedjripour
Abstract:
The required dewatering configuration for a contaminated sediment dam is discussed, so as to meet no-spill criteria for a defined average recurrence interval (ARI). There is an option for the sediment dam to pump the contaminated water to another storage facility before its capacity is exceeded. The system is subjected to a range of storm durations belonging to the design ARI, with concurrent dewatering to the other storage facility. The model is set up in 1-minute time intervals, and temporal patterns of storm events are used to disaggregate the total storm depth into partial durations. By running the model for selected storm durations, the maximum water volume in the dam is recorded as the critical volume, which indicates the required storage capacity for that storm duration. Runoff from the upstream catchment and the direct rainfall over the open area of the dam are calculated by taking into account the time of concentration of the catchment. In total, 99 different storm durations from 5 minutes to 72 hours were modelled, together with five dewatering scenarios from 50 l/s to 500 l/s. The optimised dam/pump configuration is selected by plotting critical points for all cases and storage-dewatering envelopes. A simple economic analysis is also presented in the paper, using present-value (PV) analysis to assist with the financial evaluation of each configuration and the selection of the best alternative.
Keywords: contaminated water, optimisation, pump, sediment dam
Procedia PDF Downloads 369
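A stripped-down sketch of the 1-minute mass balance described above: rainfall on the catchment and on the dam pond accumulates in storage while a constant pump rate removes water, and the peak stored volume over the storm is the required capacity for that storm/pump pair. The storm pattern, catchment area, runoff coefficient and pump rates are assumed for illustration, and the time of concentration is ignored here.

```python
def critical_volume(rain_mm_per_step, catchment_m2, dam_area_m2,
                    runoff_coeff, pump_l_per_s, step_s=60.0):
    """Peak stored volume (m³) over one storm for a given pump rate."""
    volume = 0.0
    peak = 0.0
    pump_m3 = pump_l_per_s / 1000.0 * step_s          # pumped volume per step
    for rain_mm in rain_mm_per_step:
        depth_m = rain_mm / 1000.0
        inflow = depth_m * (runoff_coeff * catchment_m2 + dam_area_m2)
        volume = max(0.0, volume + inflow - pump_m3)
        peak = max(peak, volume)
    return peak

# 2-hour design storm disaggregated into 1-minute depths (synthetic pattern)
storm = [0.2] * 30 + [0.8] * 30 + [1.5] * 20 + [0.5] * 40

for pump in (50, 100, 200, 500):                      # dewatering scenarios, l/s
    v = critical_volume(storm, catchment_m2=250_000, dam_area_m2=15_000,
                        runoff_coeff=0.6, pump_l_per_s=pump)
    print(f"pump {pump:>3} l/s -> required storage ≈ {v:,.0f} m³")
```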
28369 Evaluating the Influence of Financial Technology (FinTech) on Sustainable Finance: A Comprehensive Global Analysis
Authors: Muhammad Kashif
Abstract:
The primary aim of this paper is to investigate the influence of financial technology (FinTech) on sustainable finance. The sample for this study spans from 2010 to 2021, encompassing data from 89 countries worldwide. The study employed a two-stage least squares (2SLS) regression approach with instrumental variables and validated the findings using a two-step system generalized method of moments (GMM). The findings indicate that fintech has a significantly favorable impact on sustainable finance. Other factors such as institutional quality, socio-economic conditions, and renewable energy also have a significant and beneficial influence on the trajectory of sustainable finance, whereas the impact of globalization is positive but insignificant. Furthermore, fintech is crucial in driving the transition toward a sustainable future characterized by a lower-carbon economy. The study found that fintech has extensive applications across various sectors of sustainable finance and has substantial potential to create long-term positive effects on sustainable finance. Fintech can integrate extensively with other technologies to facilitate diversified growth in sustainable finance. Additionally, this study highlights fintech-related trends and research opportunities in sustainable finance, showing how these can promote each other worldwide, with important policy implications for countries looking to advance sustainable finance through technology.
Keywords: sustainable development goals (SDGs), financial technology (FinTech), genuine savings index (GSI), financial stability index, sustainable finance
Procedia PDF Downloads 134
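A bare-bones illustration of the 2SLS idea used above, implemented with plain numpy on simulated data; the variable names and the instrument are hypothetical, not the paper's panel of 89 countries. The endogenous fintech variable is first projected onto the instrument and exogenous controls, and the fitted values then replace it in the sustainable-finance equation.

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
n = 500
instrument = rng.normal(size=n)          # hypothetical instrument
control = rng.normal(size=n)             # e.g. institutional quality (assumed)
u = rng.normal(size=n)                   # unobserved confounder
fintech = 0.8 * instrument + 0.4 * control + 0.5 * u + rng.normal(size=n)
sustainable_finance = (0.6 * fintech + 0.3 * control + 0.7 * u
                       + rng.normal(size=n))

ones = np.ones(n)
# Stage 1: project the endogenous regressor onto instrument + exogenous control
Z = np.column_stack([ones, instrument, control])
fintech_hat = Z @ ols(Z, fintech)
# Stage 2: use the projected regressor in the structural equation
X2 = np.column_stack([ones, fintech_hat, control])
beta = ols(X2, sustainable_finance)

naive = ols(np.column_stack([ones, fintech, control]), sustainable_finance)
print("2SLS estimate of the fintech effect:", round(beta[1], 3))
print("naive OLS estimate (biased):        ", round(naive[1], 3))
```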
28368 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of high fructose corn syrup (HFCS) to samples of the honey pool. 237 samples were used: 108 of them were authentic honey and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40 °C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70 °Brix) with distilled water. Adulterant solutions were also adjusted to 70 °Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The specular reflectance technique was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by standard normal variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining potential functions (PF) with partial least squares discriminant analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; they were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. Using potential functions (PF) and partial least squares discriminant analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
Procedia PDF Downloads 125
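A compact sketch of the SNV pretreatment plus the PLS-DA classification step described above (PLS regression on a 0/1 class label thresholded at 0.5), run on simulated spectra. The potential-functions step, the ACOGASS variable selection and the Kennard-Stone split are not reproduced, so the printed classification rate is illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(0)
n_auth, n_adult, n_vars = 108, 129, 300
band = np.exp(-0.5 * ((np.arange(n_vars) - 120) / 10.0) ** 2)
authentic = rng.normal(1.0, 0.05, (n_auth, n_vars))
adulterated = rng.normal(1.0, 0.05, (n_adult, n_vars)) + 0.15 * band

X = snv(np.vstack([authentic, adulterated]))
y = np.concatenate([np.zeros(n_auth), np.ones(n_adult)])   # 1 = adulterated
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=1, stratify=y)

plsda = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = (plsda.predict(X_val).ravel() > 0.5).astype(int)
print("correct classification rate:", round((y_pred == y_val).mean(), 3))
```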