Search results for: inverse category

971 Enhanced Dielectric Properties of La Substituted CoFe2O4 Magnetic Nanoparticles

Authors: M. Vadivel, R. Ramesh Babu

Abstract:

Spinel ferrite magnetic nanomaterials have received a great deal of attention in recent years due to their wide range of potential applications in various fields such as magnetic data storage and microwave devices. Among the family of spinel ferrites, cobalt ferrite (CoFe2O4) has been widely used in high-frequency applications because of its remarkable material qualities such as moderate saturation magnetization, high coercivity, large permeability at higher frequencies and high electrical resistivity. For the aforementioned applications, the material should have improved electrical properties, especially enhanced dielectric properties. It is well known that the substitution of rare earth metal cations in the Fe3+ site of CoFe2O4 nanoparticles leads to structural distortion and thus significantly influences the structural and morphological properties and greatly modifies the electrical and magnetic properties of the material. In the present investigation, we report on the influence of lanthanum (La3+) ion substitution on the structural, morphological, dielectric and magnetic properties of CoFe2O4 magnetic nanoparticles prepared by the co-precipitation method. Powder X-ray diffraction patterns reveal the formation of the inverse cubic spinel structure with the signature of a LaFeO3 phase at higher La3+ ion concentrations. Raman and Fourier transform infrared spectral analyses also confirm the formation of the inverse cubic spinel structure and the Fe-O symmetrical stretching vibrations of the CoFe2O4 nanoparticles, respectively. Transmission electron microscopy reveals that the size of the particles gradually increases with increasing La3+ ion concentration, whereas agglomeration is slightly reduced for the La3+ ion substituted CoFe2O4 nanoparticles compared with the undoped CoFe2O4 nanoparticles. Dielectric properties such as the dielectric constant and dielectric loss were recorded as a function of frequency and temperature, revealing that the dielectric constant gradually increases with increasing temperature as well as La3+ ion concentration. The increased dielectric constant might be attributed to the formation of the LaFeO3 secondary phase at higher La3+ ion concentrations. Magnetic measurements demonstrate that the saturation magnetization gradually decreases from 61.45 to 25.13 emu/g with increasing La3+ ion concentration, which is due to the nonmagnetic nature of the substituted La3+ ions.

Keywords: cobalt ferrite, co-precipitation, dielectric properties, saturation magnetization

Procedia PDF Downloads 290
970 Analysis of Human Toxicity Potential of Major Building Material Production Stage Using Life Cycle Assessment

Authors: Rakhyun Kim, Sungho Tae

Abstract:

Global environmental issues such as abnormal weather due to global warming, resource depletion, and ecosystem distortion have been escalating because of rapid population growth and the expansion of industrial and economic development. Accordingly, initiatives have been implemented by many countries to protect the environment through indirect regulation methods such as Environmental Product Declaration (EPD), in addition to direct regulations such as various emission standards. Following this trend, life cycle assessment (LCA) techniques that provide quantitative environmental information, such as Human Toxicity Potential (HTP), for buildings are being developed in the construction industry. However, at present, studies on the environmental database of building materials are not sufficient to provide this support adequately. The purpose of this study is to analyze the human toxicity potential of the major building material production stage using life cycle assessment. For this purpose, a theoretical consideration of life cycle assessment and environmental impact categories was performed and the direction of the study was set: the major materials from the global warming potential point of view were identified for the building, and a life cycle inventory database was selected. Classification was performed for 17 substances and impact indices, such as human toxicity potential, as specified in CML 2001. The environmental impact in terms of human toxicity potential for the building material production stage was calculated through characterization. Meanwhile, the environmental impacts of building materials in the same category were analyzed based on the characterization results calculated in this study. In this study, environmental impact coefficients of major building materials were established in compliance with ISO 14040. Through this, it is believed the results can effectively support the decisions of stakeholders to improve the environmental performance of buildings and provide a basis for the voluntary participation of architects in environmental consideration activities.
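
As a rough illustration of the characterization step described above, the sketch below multiplies a hypothetical emission inventory by CML-style HTP characterization factors; the substance names, amounts and factor values are placeholders, not data from this study or from CML 2001.

```python
# Minimal sketch of LCIA characterization for Human Toxicity Potential (HTP).
# Inventory amounts and characterization factors are hypothetical placeholders.

inventory = {          # kg emitted per unit of a building material (assumed)
    "arsenic_to_air": 1.2e-6,
    "benzene_to_air": 3.4e-4,
    "nickel_to_water": 5.0e-5,
}

htp_factors = {        # kg 1,4-dichlorobenzene eq. per kg (assumed values)
    "arsenic_to_air": 3.5e5,
    "benzene_to_air": 1.9e3,
    "nickel_to_water": 3.3e2,
}

def characterize(inventory, factors):
    """HTP = sum over substances of (emission amount x characterization factor)."""
    return sum(amount * factors[substance]
               for substance, amount in inventory.items())

print(f"HTP: {characterize(inventory, htp_factors):.3f} kg 1,4-DB eq.")
```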

Keywords: human toxicity potential, major building material, life cycle assessment, production stage

Procedia PDF Downloads 109
969 Static Study of Piezoelectric Bimorph Beams with Delamination Zone

Authors: Zemirline Adel, Ouali Mohammed, Mahieddine Ali

Abstract:

First Order Shear Deformation Theory (FOSDT) is taken into consideration to study the static behavior of a bimorph beam with a delamination zone between the upper and the lower layer. The effects of boundary conditions and of the length of the delamination zone are presented in this paper, with an application to the PVDF piezoelectric material. The finite element method (FEM) is used to discretize the beam. In the axial displacement, a displacement field appears in the debonded zone, with an inverse effect between the upper and the lower layer.

Keywords: static, piezoelectricity, beam, delamination

Procedia PDF Downloads 393
968 In-situ Observations Using SEM-EBSD for Bending Deformation in Single-Crystal Materials

Authors: Yuko Matayoshi, Takashi Sakai, Yin-Gjum Jin, Jun-ichi Koyama

Abstract:

To elucidate the material characteristics of single crystals of pure aluminum and copper, the respective relations between crystallographic orientations and microstructures were examined, along with bending and mechanical properties. The texture distribution was also analysed. Bending tests were performed in an SEM apparatus while the bending behavior was observed in situ. Analytical results related to crystal direction maps, inverse pole figures, and textures were obtained from electron backscatter diffraction (EBSD) analyses.

Keywords: pure aluminum, pure copper, single crystal, bending, SEM-EBSD analysis, texture, microstructure

Procedia PDF Downloads 347
967 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film

Authors: Nalla Somaiah, Praveen Kumar

Abstract:

Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot-spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated by continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. Herein, it is generally understood that electromigration induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature gradient induced mass transport was studied using the standard Blech structure. In this particular sample structure, the majority of the current is forcefully directed into the low resistivity metallic film from a high resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the configuration of the Blech structure. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150 and 200 μm, was fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250ºC. Herein, along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, the smaller length samples consistently showed enhanced migration at the cathode end, thus indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model showing the interplay between the electromigration and thermomigration driving forces has been developed to explain this observation.
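
For orientation, the classic Blech criterion mentioned above states that electromigration damage is suppressed while the current-density-length product j·L stays below a critical value set by the back stress the line can sustain. The sketch below evaluates that criterion with illustrative, textbook-style constants for Cu; none of these values (atomic volume, effective charge, resistivity, threshold stress) are measurements from this work.

```python
# Minimal sketch of the Blech criterion: electromigration damage is suppressed
# when j * L stays below (j*L)_crit ~ (delta_sigma * Omega) / (e * Z* * rho).
# All material constants below are illustrative values, not measured here.

E_CHARGE = 1.602e-19        # C
OMEGA    = 1.18e-29         # atomic volume of Cu, m^3 (assumed)
Z_EFF    = 5.0              # effective charge number magnitude (assumed)
RHO      = 3.0e-8           # resistivity at test temperature, ohm*m (assumed)
DSIGMA   = 5.0e8            # maximum sustainable back stress, Pa (assumed)

def critical_length(j):
    """Critical (Blech) length in metres for a given current density j [A/m^2]."""
    return (DSIGMA * OMEGA) / (E_CHARGE * Z_EFF * RHO * j)

j = 4e10                    # A/m^2, current density used in the experiment
L_crit = critical_length(j)
print(f"(j*L)_crit ~ {j * L_crit:.2e} A/m, L_crit ~ {L_crit * 1e6:.1f} um at j = {j:.1e} A/m^2")
```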

Keywords: Blech structure, electromigration, temperature gradient, thin films

Procedia PDF Downloads 235
966 Dielectric Response Analysis Measurement for Diagnostic Oil-Paper Insulation System on Aged Inter Bus Transformer 3x10 MVA

Authors: Eki Farlen, Akas

Abstract:

Condition assessment of oil-paper-insulated power transformers, particularly of water content, is becoming increasingly important for aged transformers. As insulation ages, it can produce water, which reduces its dielectric strength, accelerates the cellulose ageing process, and causes gas bubbles to form at high temperatures. This paper mainly assesses the life condition of the oil-paper insulation system of an Inter Bus Transformer (IBT) 30 MVA, 150/30 kV at the PT PLN Jelok substation, which has been operating for 41 years, since 1974. Valuable information about the condition of high-voltage insulation may be obtained by measuring its dielectric response. This paper describes in detail the interpretation of Dielectric Response Analysis (DIRANA) measurements, and the test results are compared to other insulation tests, such as the tan delta test, oil characteristic tests and Dissolved Gas Analysis (DGA), to obtain deeper diagnostic information. The paper mainly discusses the relationships between moisture content, water content, acidity, oil conductivity and dissipation factor. The results and analysis show that phases U and W of the IBT 30 MVA Jelok have aged due to a high acidity level (>0.2 mgKOH/g), which causes high moisture in the cellulose/paper: the paper falls in the wet category at about 4.7% and 5%, with water content in the oil of about 3.13 ppm and 3.33 ppm at a temperature of 20°C. A high acidity level can drive the oxidation process and produce water and particles in the paper, which can decrease the Interfacial Tension (IFT) to below 22 mN/m (poor category) for both phases U and W. Even though the paper insulation of the transformer is in wet condition, the dissipation factor and capacitance at the same frequency (50 Hz) from the DIRANA and tangent delta measurements give nearly the same results, 0.69% and 0.71% (<1%), which may be acceptable and need not be investigated further. The DGA results show that TDCG is in the level-one condition and no key gases were found, meaning that the transformer experienced no failure during operation such as arcing, partial discharge or thermal faults in the oil or cellulose.

Keywords: diagnostic, inter-bus transformer, oil-paper insulation, moisture, dissipation factor

Procedia PDF Downloads 260
965 Lexical Knowledge of Verb Particle Constructions with the Particle on by Mexican English Learners

Authors: Sarai Alvarado Pineda, Ricardo Maldonado Soto

Abstract:

The acquisition of verb particle constructions is a challenge for Spanish speakers learning English. The acquisition is particularly difficult for speakers of languages with no verb particle constructions. The purpose of the current study is to define the procedural steps in the acquisition of constructions with the particle on. There are three outstanding meanings for the particle on; Surface: The movie is based on a true story; Activation: John turned on the light; Continuity: The band played on all night. The central aim of this study is to measure how Mexican Spanish participants respond both to the three meanings mentioned above and to the degree of meaning transparency/opacity of on verb particle constructions. Forty Mexican Spanish learners of English (20 basic and 20 advanced) were compared against a control group of 20 American native English speakers through a reaction time test (PsychoPy2 2015). The participants were asked to discriminate 90 items based on their knowledge of these constructions. There are 30 items per meaning, divided into two groups of transparent and opaque meaning. Results revealed three major findings: advanced students have a reaction time similar to that of native speakers (advanced 4.5s versus native 3.7s), while students with a lower level of English proficiency show a high reaction time (7s). Likewise, there is a shorter reaction time for constructions with lower opacity in the three groups of participants, with differences between each level (basic 6.7s, advanced 4.3s, and native 3.4s). Finally, a difference in reaction time can be identified according to the meaning conveyed by the construction: the reaction time for the activation category (5.27s) is greater than for continuity (5.04s), and this category is in turn slower than surface (4.94s). The study shows that the level of sensitivity of English learners increases significantly towards native-speaker patterns as determined by the level of transparency of meaning of each construction as well as the degree of entrenchment of each constructional meaning.

Keywords: meaning of the particle, opacity, reaction time, verb particle constructions

Procedia PDF Downloads 243
964 Efficient Filtering of Graph Based Data Using Graph Partitioning

Authors: Nileshkumar Vaishnav, Aditya Tatu

Abstract:

An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem with a direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method for partitioning of graphs and use it to reduce the computational cost of the filtering problem. We use the example of denoising of temperature data to illustrate the efficacy of the approach.
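
The inverse filtering problem stated above can be written as solving H(A)x = y with H(A) = Σ_k h_k A^k. The sketch below sets up and solves that system directly on a small random graph; it illustrates the O(N³) baseline only and does not reproduce the partitioning-based speed-up proposed in the paper (graph size and filter taps are arbitrary choices).

```python
# Direct inverse filtering on a graph: given filter taps h and the filtered
# output y = H(A) x with H(A) = sum_k h[k] A^k (adjacency matrix as shift),
# recover the input graph signal x by solving the linear system.

import numpy as np

rng = np.random.default_rng(0)

N = 8                                        # small toy graph (assumed)
A = rng.integers(0, 2, size=(N, N))
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, unweighted adjacency

h = np.array([1.0, 0.5, 0.25])               # known filter coefficients (assumed)
H = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(h))

x_true = rng.standard_normal(N)              # unknown input graph signal
y = H @ x_true                               # observed filtered output

x_rec = np.linalg.solve(H, y)                # O(N^3) direct solve
print("max reconstruction error:", np.max(np.abs(x_rec - x_true)))
```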

Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing

Procedia PDF Downloads 284
963 Diversity and Quality of Food Consumption Compared to Nutritional Status in Ages 15–17 Years Old in Jakarta

Authors: Andra Vidyarini

Abstract:

Adolescence is a transition period in which various changes occur, biologically, intellectually and psychosocially. One of these changes is a change in food consumption patterns, which makes adolescents vulnerable to nutritional problems that can affect their growth and health in the future. Nutritional problems in adolescents have increased from year to year, and one of the causes is the low diversity and quality of consumption. The diversity and quality of consumption can be assessed through the Individual Dietary Diversity Score and the Healthy Eating Index. Currently, data on the diversity and quality of food consumption in Indonesia, especially among adolescents, are still scarce. The general purpose of this study is to describe the diversity and quality of adolescent food consumption and the relationship between the diversity and quality of food consumption and nutritional status. This is a cross-sectional study examining the diversity and quality of consumption of adolescents aged 15-17 years. The total number of subjects in this study was 70 teenagers. The research was conducted online via a Google Form. Data analysis in this study was univariate and bivariate. The results showed that the diversity of the subjects' food consumption was in the diverse and very diverse category, with an average of 6, whereas the quality was still not good, falling in the poor and moderate categories with an average of 12.93. The nutritional status of the majority of the subjects was in the normal category, with others ranging from overweight to obese. The implementation of blended learning, with still-limited face-to-face meetings at school, may explain why the teenagers' food consumption was more diverse than when they attend school face-to-face. In addition, changes in people's diets during the pandemic also influenced the results of the study: eating habits changed to three meals a day, with menu choices ranging from rice, meat, fish and beans to vegetables. Analysis of the relationship between the diversity and quality of food consumption shows that the diversity of consumption has a significant relationship with the quality of food consumption, with a p-value of 0.002 (p<0.05). Meanwhile, the diversity and quality of food consumption have no significant relationship with nutritional status, with p-values of 0.777 and 0.251 (>0.05), respectively. This shows that the diversity of food consumption is directly proportional to the quality of consumption: subjects with varied food consumption also had adequate quality in terms of portions and weight, in accordance with the PGRS recommendations.

Keywords: healthy eating index (HEI), food diversity, quality of consumption, adolescent

Procedia PDF Downloads 145
962 Classroom Interaction Patterns as Correlates of Senior Secondary School Achievement in Chemistry in Awka Education Zone

Authors: Emmanuel Nkemakolam Okwuduba, Fransica Chinelo Offiah

Abstract:

The technique of teaching chemistry to students is one of the determining factors of their achievement. Thus, the study investigated the relationship between classroom interaction patterns and students' achievement in chemistry. The purpose of this study was to identify patterns of interaction in observed chemistry classrooms, determine the amount of teacher talk, student talk and periods of silence, and find out the relationship between these and the mean achievement scores of students. Five research questions and three hypotheses guided the study. The study was a correlational survey. The sample consisted of 450 (212 males and 238 females) senior secondary one students and 12 (5 males and 7 females) chemistry teachers drawn from 12 selected secondary schools in the Awka Education Zone of Anambra State. In each of the 12 selected schools, an intact class was used. A Science Interaction Category (SIC) and a Chemistry Achievement Test (CAT) were developed, validated and used for data collection. Each teacher was observed three times and the interaction patterns were coded using a coding sheet containing the Science Interaction Category. At the end of the observational period, the Chemistry Achievement Test (for the collection of data on students' achievement in chemistry) was administered to the students. Frequencies, percentages, means, standard deviations and the Pearson product-moment correlation were used for data analysis. The results showed that the percentages of teacher talk, student talk and silence were 59.6%, 37.6% and 2.8%, respectively. The Pearson correlation coefficients (r) for teacher talk, student talk and silence were -0.61, 0.76 and -0.18, respectively. The results showed a negative and significant relationship between teacher talk and the mean achievement scores of students, a positive and significant relationship between student talk and the mean achievement scores of students, and no relationship between periods of silence and the mean achievement scores of students at the 0.05 significance level. The following recommendation was made based on the findings: teachers should encourage a high level of student talk through initiation and response, as it promotes involvement and enhances achievement.

Keywords: academic achievement, chemistry, classroom, interactions patterns

Procedia PDF Downloads 285
961 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study was aimed at clarifying the relationship between general intellectual abilities and efficiency in free recall and rhymed word generation tasks after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in a cognitive task and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words. Each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair. Thus, the words' semantics was processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or not rhymed, but this phonemic characteristic of the stimuli (rhymed versus non-rhymed words) was processed incidentally. Then participants were asked to produce as many rhymes as they could to new words. The stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall compared to the peripheral stimuli. In addition, all the rhymed stimuli were recalled more effectively than the non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli compared to focal ones. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. Thus, the information that was processed incidentally had a supplemental influence on the efficiency of stimulus processing in both the free recall and the word generation tasks. Different patterns of correlations between intellectual abilities and the efficiency of processing of different stimuli in the two tasks were revealed. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli, but it was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. As for the rhymed word generation task, verbal intelligence correlated negatively with generation of focal stimuli and positively with generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks. Thus, incidental information processing appears to be crucial for subsequent cognitive performance. Secondly, it was demonstrated that incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the "Grant of President of RF for young PhD scientists" (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 212
960 Investigating the Relationship between Growth, Beta and Liquidity

Authors: Zahra Amirhosseini, Mahtab Nameni

Abstract:

The aim of this study was to investigate the relationship between growth, beta, and the company's cash holdings. We use cash as the dependent variable and growth opportunity and beta as independent variables. This study was based on an analysis of panel data. The population of the study is the companies listed on the Tehran Stock Exchange, and financial data of 215 companies during the period 2010 to 2015 were selected as the sample through systematic sampling. The results of the first hypothesis showed that there is a significant relationship between growth opportunities and cash holdings. Also, according to the analysis done for the second hypothesis, we determined that there is an inverse relation between company risk and cash holdings.

Keywords: growth, beta, liquidity, company

Procedia PDF Downloads 366
959 Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1

Authors: Kaoutar Lamrini Uahabi, Mohamed Atounti

Abstract:

In the present work, we consider one category of curves denoted by L(p, k, r, n). These curves are continuous arcs which are trajectories of roots of the trinomial equation z^n = αz^k + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1, and α is a real parameter greater than 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as fractal dimension, we will prove that the dimension of L is equal to 3/2.
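
For intuition, the sketch below samples one such family of arcs by tracing the roots of z^n − αz^k − (1 − α) = 0 as α sweeps over (1, α_max], and then applies a crude box-counting estimate. The choices of n, k, the α range and the box sizes are purely illustrative, and the estimate applies only to the sampled arcs, not to the full union L for which the paper proves the dimension 3/2.

```python
# Trace roots of z^n - alpha*z^k - (1 - alpha) = 0 over a range of alpha and
# estimate a box-counting dimension of the sampled point set.

import numpy as np

def trinomial_roots(n, k, alphas):
    """Collect all roots of z^n - alpha*z^k - (1 - alpha) = 0 for each alpha."""
    points = []
    for a in alphas:
        coef = np.zeros(n + 1)
        coef[0] = 1.0            # z^n
        coef[n - k] = -a         # -alpha * z^k
        coef[n] = a - 1.0        # -(1 - alpha)
        points.extend(np.roots(coef))
    return np.array(points)

def box_counting_dimension(points, sizes):
    """Least-squares slope of log N(eps) against log(1/eps)."""
    counts = []
    for eps in sizes:
        boxes = {(int(np.floor(z.real / eps)), int(np.floor(z.imag / eps)))
                 for z in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

pts = trinomial_roots(n=7, k=3, alphas=np.linspace(1.001, 10.0, 4000))
dim = box_counting_dimension(pts, sizes=np.array([0.2, 0.1, 0.05, 0.025]))
print(f"estimated box-counting dimension of the sampled arcs: {dim:.2f}")
```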

Keywords: feasible angles, fractal dimension, Minkowski sausage, trinomial curves, trinomial equation

Procedia PDF Downloads 160
958 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand

Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat

Abstract:

The Precise Point Positioning (PPP) technique is used to improve accuracy by using precise satellite orbit and clock correction data, but the technique involves complicated methods and high costs. Currently, there are several online processing service providers which offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning and CenterPoint RTX post-processing by Trimble, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the smallest distance root mean square (DRMS) value, 0.0029, which is good enough for calculating the movement of tectonic plates. In the second part, we use AUSPOS to process the data of the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 Mw earthquake occurred north of Sumatra and strongly affected all nearby countries, including Thailand. The earthquake effects have led to errors in the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movements differ across the geodetic network and are relatively large, so continued surveying is needed to improve the GPS coordinate system every year. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network consisting of 19 pins in Thailand during October 2013 to November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand. This pin moved in the south-western direction by 11.04 cm. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., in the direction observed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.
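
As a reminder of how the IDW surface mentioned above is formed, the interpolated value at a query point is the distance-weighted average z(x₀) = Σ wᵢzᵢ / Σ wᵢ with wᵢ = 1/dᵢᵖ. The sketch below implements this directly; the pin coordinates and displacement values (apart from the 11.04 cm figure quoted for pin 3239) are invented for illustration and are not the RTSD network data.

```python
# Minimal Inverse Distance Weighting (IDW) interpolation sketch.

import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered observations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ z_known) / w.sum(axis=1)

pins = np.array([[98.5, 16.9], [100.5, 13.7], [99.0, 9.1]])   # lon, lat (assumed)
disp_cm = np.array([11.04, 6.5, 4.2])                          # movement, cm (only 11.04 is from the study)
grid = np.array([[99.5, 14.0], [100.0, 10.0]])                 # query points (assumed)
print(idw(pins, disp_cm, grid))
```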

Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting

Procedia PDF Downloads 170
957 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition. The related Gabor transform is powerful enough to capture the texture information of any given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined by using the translation and modulation operators of graph signals, following calculations similar to those in the classical windowed Fourier transform. Specifically, the translation and modulation operators of graph signals are defined using the Laplacian eigenvectors: just as the classical translation operator can be expressed in terms of the Fourier atoms, the graph signal translation is defined analogously through the Laplacian eigenvectors, and the modulation on the graph can also be established using the Laplacian eigenvectors. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator is defined, similarly to classical modulation, by multiplying a graph signal with the entries of each Fourier atom. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed. Specifically, the relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed. Necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented in this paper. The novel graph signal time-frequency analysis framework is applied to signals defined on well-known graphs, e.g. the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
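
The sketch below is a heavily hedged numerical reading of the construction described above: the graph Fourier transform (GFT) is taken with respect to the Laplacian eigenvectors, the generalized translation follows the standard windowed-graph-Fourier literature, and the proposed modulation is then formed literally as iGFT(translate(GFT(f))). The normalization and the toy graph are my assumptions, not code from the paper.

```python
# GFT, generalized translation, and a modulation built as iGFT(translate(GFT(f))).

import numpy as np

rng = np.random.default_rng(1)
N = 10
W = rng.random((N, N))
W = np.triu(W, 1)
W = W + W.T                                   # random weighted undirected graph
L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian
_, U = np.linalg.eigh(L)                      # Laplacian eigenvectors (GFT basis)

gft  = lambda f: U.T @ f                      # graph Fourier transform
igft = lambda F: U @ F                        # inverse graph Fourier transform

def translate(f, i):
    """Generalized translation to vertex i: sqrt(N) * sum_l fhat[l] * U[i, l] * U[:, l]."""
    return np.sqrt(N) * (U * (gft(f) * U[i, :])).sum(axis=1)

def modulate(f, i):
    """Proposed modulation: inverse GFT of the translation of the GFT of f."""
    return igft(translate(gft(f), i))

f = rng.standard_normal(N)
print(modulate(f, i=3))
```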

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 315
956 LCA and LCC for the Evaluation of Sustainability of Rapeseed, Giant Reed, and Poplar Cultivation

Authors: Alessandro Suardi, Rodolfo Picchio, Domenico Coaloa, Maria Bonaventura Forleo, Nadia Palmieri, Luigi Pari

Abstract:

The reconversion of the Italian sugar supply chain to bio-energy supply chains, as a result of the 2006 Sugar CMO reform, has prompted research to define the best logistics, the energy crops best adapted to the Italian territory, and their sustainability. Rapeseed (Brassica napus L.), giant reed (Arundo donax L.) and poplar (Populus spp.) are energy crops considered strategic for the development of Italian energy supply chains. This study analyzed the environmental and economic impacts of these three energy crops at the farm level. The environmental assessment included six farming units, two per crop, which were extracted from a sample of 251 rapeseed farm units (2751 ha), 7 giant reed farm units (7.8 ha), and 91 poplar farm units (440 ha) using a statistical multivariate analysis. The Life Cycle Assessment (LCA) research method was used to evaluate and compare the sustainability of the agricultural phases of the crops studied. The impact analyses were performed at the mid-point and end-point levels. The results of the analysis showed that fertilization is the major source of environmental impact in the agricultural phase, due to the production of the fertilizers and the soil GHG emissions following their application. The perennial energy crops studied (Arundo donax L., Populus spp.) were environmentally more sustainable than the annual crop (Brassica napus L.) for all the impact categories analyzed at the mid-point and end-point levels. The impact category most strongly influenced by the agricultural process was fossil depletion, mainly due to the fossil fuels consumed during mineral fertilizer (urea) production. Human health was the most affected damage category at the end-point level. Poplar was the energy crop with the best environmental performance for the Italian territory in the distribution areas most suitable for its cultivation.

Keywords: LCA, energy crops, rapeseed, giant reed, poplar

Procedia PDF Downloads 458
955 Applications of Probabilistic Interpolation via Orthogonal Matrices

Authors: Dariusz Jacek Jakóbczak

Abstract:

Mathematics and computer science are interested in methods of 2D curve interpolation and extrapolation using a set of key points (knots). The proposed method of Hurwitz-Radon Matrices (MHR) is such a method. This novel method is based on the family of Hurwitz-Radon (HR) matrices, which possess columns composed of orthogonal vectors. A two-dimensional curve is interpolated via different functions used as probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithmic, exponential, arcsine, arccosine, arctangent, arccotangent or power functions, as well as inverse functions. It is shown how to build the orthogonal matrix operator and how to use it in the process of curve reconstruction.

Keywords: 2D data interpolation, hurwitz-radon matrices, MHR method, probabilistic modeling, curve extrapolation

Procedia PDF Downloads 502
954 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. We explored recent deep learning networks (DNN), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
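
To make the ensemble idea concrete, the sketch below blends an XGBoost regressor trained on lagged sell-in/sell-out features with a probabilistic forecast standing in for DeepAR; the DeepAR output is mocked as a plain array rather than produced by GluonTS, and the toy data, feature choices and 50/50 blend weight are assumptions, not Nestlé's pipeline.

```python
# Hedged sketch of a sell-in forecasting ensemble: XGBoost on lagged features
# blended with a (mocked) DeepAR-style forecast.

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
T = 120                                        # weeks of toy history
sell_out = 100 + 10 * np.sin(np.arange(T) / 6) + rng.normal(0, 2, T)
sell_in  = np.roll(sell_out, 2) + rng.normal(0, 3, T)   # sell-in lags sell-out

def make_features(t):
    """Recent sell-in and sell-out lags used to predict sell-in at week t."""
    return [sell_in[t - 1], sell_in[t - 2], sell_out[t - 1], sell_out[t - 2]]

X = np.array([make_features(t) for t in range(4, T)])
y = sell_in[4:T]

model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X[:-10], y[:-10])                    # hold out the last 10 weeks

xgb_pred    = model.predict(X[-10:])
deepar_pred = y[-10:] + rng.normal(0, 3, 10)   # stand-in for a DeepAR forecast

w = 0.5                                        # blend weight (assumed)
ensemble = w * xgb_pred + (1 - w) * deepar_pred
print("ensemble MAE:", np.mean(np.abs(ensemble - y[-10:])))
```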

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 221
953 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. In the last decade, the construction project environment in the Middle East went through many changes caused by different factors, including the economic crisis, delays in monthly payments, international competition and a reduced number of projects. These factors had a great impact on the bidding behaviors of contractors and their pricing strategies. This paper examines the competition characteristics in public construction projects through an analysis of contractors' bidding results for public construction projects over a period of 6 years (2006-2011) in Jordan. The analyzed projects include all categories of projects, such as infrastructure, buildings, transportation and engineering services (design and supervision contracts). Data for the projects were obtained from the General Tenders Directorate in Jordan and include 462 projects. The analysis performed in this study includes the bid spread in all projects, as an indication of the level of competition in the analyzed bids, together with the factors that affect the bid spread, such as the number of bidders, the value of the project, the project category and the year. It also studies the signal-to-noise ratio in all projects, as an indication of the accuracy of the cost estimating performed by competing bidders and of the bidders' evaluation of project risks, including the relationship between the signal-to-noise ratio and different parameters such as project category, number of bidders and changes over the years. Moreover, the analysis includes determining the bidders' aggressiveness in bidding, as an indication of the level of competition in such projects. This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project to determine the level of aggressiveness in the submitted bids. The analysis performed in this study should prove useful to owners in understanding the bidding behaviors of contractors and pointing out areas that need improvement in preparing bidding documents. The study should also be useful to contractors in understanding the competitive bidding environment and should help them improve their bidding strategies to maximize their success rate in obtaining contracts.
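
For illustration only, the sketch below computes two of the competition measures discussed above under commonly used definitions: the bid spread as the gap between the lowest and second-lowest bid relative to the lowest, and the signal-to-noise ratio as the mean of the submitted bids divided by their standard deviation. The study's exact definitions may differ, and the bid values are made up.

```python
# Illustrative bid-spread and signal-to-noise calculations for one tender.

import statistics

bids = [1_250_000, 1_310_000, 1_298_500, 1_402_000, 1_355_000]   # bid values (assumed)

low, second = sorted(bids)[:2]
bid_spread_pct = 100.0 * (second - low) / low          # gap to the second-lowest bid
snr = statistics.mean(bids) / statistics.stdev(bids)   # mean / standard deviation of bids

print(f"bid spread: {bid_spread_pct:.1f}%  |  signal-to-noise ratio: {snr:.1f}")
```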

Keywords: construction projects, competitive bidding, public construction, competition

Procedia PDF Downloads 311
952 An Optimal Perspective on Research in Translation Studies

Authors: Andrea Musumeci

Abstract:

The general theory of translation has suffered from the lack of a homogeneous academic dialect and of a holistic methodology to account for the diversity of factors involved in the discipline. An underlying pattern amongst theories of translation belonging to different periods and schools has been identified. Such a pattern, which is linguistics oriented, could play a role towards unified academic and professional environments, both in terms of research and as a professional category. The implementation of such an approach has also led to a critique of the concept of equivalence as being not the best way to account for translating phenomena.

Keywords: optimal, translating, research translation theory, methodology, descriptive analysis

Procedia PDF Downloads 596
951 Experimental Investigation and Numerical Simulations of the Cylindrical Machining of a Ti-6Al-4V Tree

Authors: Mohamed Sahli, David Bassir, Thierry Barriere, Xavier Roizard

Abstract:

Predicting the behaviour of the Ti-6Al-4V alloy during the turning operation is very important for the choice of suitable cutting tools and of machining strategies. In this study, a 3D model with thermo-mechanical coupling is proposed to study the influence of cutting parameters and lubrication on the performance of cutting tools. The constants of the Johnson-Cook constitutive model of the Ti-6Al-4V alloy were identified using inverse analysis based on the parameters of the orthogonal cutting process. Then, numerical simulations of the finishing machining operation were developed and experimentally validated for the cylindrical stock removal stage with the finishing cutting tool.
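
The Johnson-Cook flow-stress law whose constants the study identifies by inverse analysis has the familiar form σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ). The sketch below evaluates it with commonly quoted literature values for Ti-6Al-4V; these constants are assumptions for illustration and are not the constants identified in this paper.

```python
# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m)
# Constants below are commonly quoted literature values for Ti-6Al-4V (assumed).

import math

A, B, n = 862e6, 331e6, 0.34        # Pa, Pa, -
C, m    = 0.012, 0.8                # -, -
epsdot0 = 1.0                       # reference strain rate, 1/s
T_room, T_melt = 25.0, 1650.0       # deg C

def jc_flow_stress(eps, epsdot, T):
    """Johnson-Cook equivalent flow stress in Pa."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps**n) * (1 + C * math.log(epsdot / epsdot0)) * (1 - T_star**m)

# e.g. strain 0.3, strain rate 1e4 1/s, 600 deg C -- conditions typical of turning
print(f"{jc_flow_stress(0.3, 1e4, 600.0) / 1e6:.0f} MPa")
```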

Keywords: titanium turning, cutting tools, FE simulation, chip

Procedia PDF Downloads 155
950 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of the signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
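
A minimal sketch of the regularized deconvolution idea described above: with a discretized convolution model y = Hf, the Tikhonov estimate is f̂ = (HᵀH + λI)⁻¹Hᵀy. The impulse response, the force pulse, the noise level and the value of λ below are synthetic placeholders (in practice λ would be chosen by the L-curve or GCV, as in the study).

```python
# Tikhonov-regularized deconvolution for impact-force reconstruction (toy data).

import numpy as np

rng = np.random.default_rng(0)
nt = 400
t = np.linspace(0.0, 4e-3, nt)                        # 4 ms time window (assumed)

h = np.exp(-t / 5e-4) * np.sin(2 * np.pi * 3000 * t)  # assumed impulse response
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(nt)]
              for i in range(nt)])                    # discrete convolution (Toeplitz) matrix

f_true = np.exp(-((t - 1e-3) ** 2) / (2 * (1.5e-4) ** 2))  # Gaussian pulse standing in for the hammer force
y = H @ f_true + rng.normal(0.0, 0.02, nt)                 # noisy sensor signal

lam = 1e-2                                            # regularization parameter (would be set by L-curve/GCV)
f_hat = np.linalg.solve(H.T @ H + lam * np.eye(nt), H.T @ y)

corr = np.corrcoef(f_hat, f_true)[0, 1]               # correlation criterion, as in the study
print(f"correlation between reconstructed and true force: {corr:.3f}")
```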

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 515
949 Dietary Patterns and Hearing Loss in Older People

Authors: N. E. Gallagher, C. E. Neville, N. Lyner, J. Yarnell, C. C. Patterson, J. E. Gallacher, Y. Ben-Shlomo, A. Fehily, J. V. Woodside

Abstract:

Hearing loss is highly prevalent in older people and can reduce quality of life substantially. Emerging research suggests that potentially modifiable risk factors, including factors previously related to cardiovascular disease risk, may be associated with a decreased or increased incidence of hearing loss. This has prompted investigation into the possibility that certain nutrients, foods or dietary patterns may also be associated with the incidence of hearing loss. The aim of this study was to determine any associations between dietary patterns and hearing loss in men enrolled in the Caerphilly study. The Caerphilly prospective cohort study began in 1979-1983 with the recruitment of 2512 men aged 45-59 years. Dietary data were collected using a self-administered, semi-quantitative, 56-item food frequency questionnaire (FFQ) at baseline (1979-1983), and a 7-day weighed food intake (WI) in a 30% sub-sample, while pure-tone unaided audiometric threshold was assessed at 0.5, 1, 2 and 4 kHz between 1984 and 1988. Principal components analysis (PCA) was carried out to determine a posteriori dietary patterns, and multivariate linear and logistic regression models were used to examine associations with hearing level (pure tone average (PTA) of the frequencies 0.5, 1, 2 and 4 kHz in decibels (dB)) for linear regression and with hearing loss (PTA>25dB) for logistic regression. Three dietary patterns were determined using PCA on the FFQ data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, both linear and logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P<0.001), and linear regression analysis showed a significant association between the High sugar/Alcohol avoider pattern and hearing loss (P=0.04). Three similar dietary patterns were determined using PCA on the WI data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P=0.02) and a significant association between the Traditional pattern and hearing loss (P=0.04). A Healthy dietary pattern was found to be significantly inversely associated with hearing loss in middle-aged men in the Caerphilly study. Furthermore, a High sugar/Alcohol avoider pattern (FFQ) and a Traditional pattern (WI) were associated with poorer hearing levels. Consequently, the role of dietary factors in hearing loss remains to be fully established and warrants further investigation.

Keywords: ageing, diet, dietary patterns, hearing loss

Procedia PDF Downloads 211
948 Multi-Focus Image Fusion Using SFM and Wavelet Packet

Authors: Somkait Udomhunsakul

Abstract:

In this paper, a multi-focus image fusion method using Spatial Frequency Measurement (SFM) and the wavelet packet transform was proposed. In the proposed fusion approach, the two source images were first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband was partitioned into sub-blocks, and the clearer regions in each block were identified using the Spatial Frequency Measurement (SFM). Finally, the fused image was reconstructed by performing the inverse wavelet transform. From the experimental results, it was found that the proposed method outperformed the traditional SFM-based methods in terms of objective and subjective assessments.
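
For reference, the spatial frequency of a block is SF = √(RF² + CF²), where RF and CF are the root-mean-square horizontal and vertical first differences. The sketch below applies this block-wise selection directly in the image domain for brevity; in the method described above the selection is performed on the wavelet packet subbands before the inverse transform, and the images and block size here are placeholders.

```python
# Block-wise fusion driven by the spatial frequency measure SF = sqrt(RF^2 + CF^2).

import numpy as np

def spatial_frequency(block):
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse(img_a, img_b, bs=8):
    """Keep, for each block, the source block with the larger spatial frequency."""
    fused = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            a, b = img_a[i:i+bs, j:j+bs], img_b[i:i+bs, j:j+bs]
            fused[i:i+bs, j:j+bs] = a if spatial_frequency(a) >= spatial_frequency(b) else b
    return fused

rng = np.random.default_rng(0)
imgA = rng.random((64, 64))          # stand-ins for the two multi-focus images
imgB = rng.random((64, 64))
print(fuse(imgA, imgB).shape)
```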

Keywords: multi-focus image fusion, wavelet packet, spatial frequency measurement

Procedia PDF Downloads 457
947 The Complexity of Testing Cryptographic Devices on Input Faults

Authors: Alisher Ikramov, Gayrat Juraev

Abstract:

The production of logic devices faces the occurrence of faults during manufacturing. This work analyses the complexity of testing a special type of logic device on inverse, adhesion, and constant input faults. The focus of this work is on devices that implement cryptographic functions. The complexity values for the general case faults and for some frequently occurring subsets were determined and proved in this work. For a special case, when the length of the text block is equal to the length of the key block, the complexity of testing is proven to be asymptotically half the complexity of testing all logic devices on the same types of input faults.

Keywords: complexity, cryptographic devices, input faults, testing

Procedia PDF Downloads 200
946 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase

Authors: Samart Homthong, Wutthipong Moungnoi

Abstract:

Critical success factors (CSFs) and the criteria used to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although there have been extensive studies on the subject by different researchers, to date there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages of the project life cycle. A considerable literature review was conducted that resulted in the identification of 179 individual factors. These factors were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Out of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on the categorised factors was analysed using Spearman's rank correlation. A one-way analysis of variance was then performed to determine whether the mean scores among the various groups of respondents differed significantly. The findings indicate that the most critical success factors in each category for the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality). They are followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in the contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of the bill of quantities (risk management), adequate communication among the project team (human resources), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance. Furthermore, the results of this study would help construction professionals and practitioners take proactive measures for effective project management.
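
As a pointer to the ranking step, the relative importance index mentioned above is commonly computed as RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating and N is the number of respondents; the sketch below uses made-up ratings, and the study's exact weighting scheme is not reproduced.

```python
# Relative importance index (RII) for one factor:  RII = sum(W) / (A * N)

ratings = [5, 4, 4, 5, 3, 4, 5, 5, 4, 3]   # 1-5 Likert responses (assumed)
A, N = 5, len(ratings)                     # highest rating and number of respondents
rii = sum(ratings) / (A * N)
print(f"RII = {rii:.3f}")
```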

Keywords: critical success factors, procurement phase, project life cycle, project performance

Procedia PDF Downloads 166
945 Spatial Distribution of Heavy Metals in Khark Island-Iran Using Geographic Information System

Authors: Abbas Hani, Maryam Jassasizadeh

Abstract:

The concentrations of Cd, Pb, and Ni were determined from 40 soil samples collected from the surface soils of Khark Island. Geostatistical methods and GIS were used to identify heavy metal sources and their spatial pattern. Principal component analysis, coupled with the correlations between heavy metals, showed that the levels of the heavy metals mentioned were lower than the standard levels. The data obtained from the soil analysis were then tested for normal distribution. The best interpolation method for cadmium and nickel was ordinary kriging, while the best interpolation method for lead was inverse distance weighting. The results of this study help us to understand the heavy metal distribution and to make decisions for the remediation of soil pollution.
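
A hedged sketch of the ordinary-kriging step reported for Cd and Ni, using the pykrige package; the sample coordinates, concentrations, variogram model and grid are illustrative placeholders, not the Khark Island data.

```python
# Ordinary kriging of a scattered concentration field with pykrige.

import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 40)                            # easting, km (assumed)
y = rng.uniform(0, 5, 40)                            # northing, km (assumed)
cd = rng.lognormal(mean=-1.0, sigma=0.4, size=40)    # Cd concentration, mg/kg (assumed)

ok = OrdinaryKriging(x, y, cd, variogram_model="spherical")
gridx = np.linspace(0, 5, 50)
gridy = np.linspace(0, 5, 50)
z_map, variance = ok.execute("grid", gridx, gridy)   # kriged surface + kriging variance
print(z_map.shape, float(z_map.mean()))
```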

Keywords: geostatistics, ordinary kriging, heavy metals, GIS, Khark

Procedia PDF Downloads 141
944 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell carcinoma (HNSCC) survival, but the evidence for causality has not been consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW) 2.64 [95% CI 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW 0.11 [95% CI 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both the total HNSCC (HR IVW 4.17 [95% CI 1.06-16.36]; P = 0.04) and the non-HPV-driven HNSCC patients (HR IVW 7.33 [95% CI 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW 1.56 [95% CI 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and with increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
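
For reference, the fixed-effect inverse-variance weighted estimator used above combines SNP-level ratios as β_IVW = Σ(βₓβ_y/se_y²) / Σ(βₓ²/se_y²), which is exponentiated to a hazard ratio for survival outcomes. The SNP effect sizes in the sketch below are invented for illustration and are not GLGC or VOYAGER estimates.

```python
# Fixed-effect inverse-variance weighted (IVW) Mendelian randomization estimate.

import numpy as np

bx   = np.array([0.12, 0.08, 0.15, 0.05])    # SNP -> LDL-C effects (assumed)
by   = np.array([0.10, 0.09, 0.18, 0.02])    # SNP -> log hazard ratio for survival (assumed)
se_y = np.array([0.05, 0.06, 0.07, 0.04])    # standard errors of by (assumed)

w = bx**2 / se_y**2
beta_ivw = np.sum(bx * by / se_y**2) / np.sum(w)
se_ivw = 1.0 / np.sqrt(np.sum(w))

hr = np.exp(beta_ivw)
ci = np.exp(beta_ivw + np.array([-1.96, 1.96]) * se_ivw)
print(f"HR_IVW = {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```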

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 78
943 Serious Digital Video Game for Solving Algebraic Equations

Authors: Liliana O. Martínez, Juan E González, Manuel Ramírez-Aranda, Ana Cervantes-Herrera

Abstract:

A mobile application in the serious game category called Math Dominoes is presented. The main objective of this application is to strengthen the teaching-learning process of solving algebraic equations, and it is based on the "Double 6" dominoes board game. Math Dominoes allows the practice of solving first-, second-, and third-degree algebraic equations. The application is aimed at students who seek to strengthen their skills in solving algebraic equations in a dynamic, interactive, and fun way, to reduce the risk of failure in subsequent courses that require mastery of this algebraic tool.

Keywords: algebra, equations, dominoes, serious games

Procedia PDF Downloads 104
942 Effects of Age and Energy Expenditure on Obesity Among Adults in Abeokuta, Nigeria

Authors: Adeniyi Samuel Adekoya

Abstract:

The study assessed the independent effects of age and energy expenditure on the risk of obesity among adults (20-64 years). It was a cross-sectional study in which changes in age and information on work and leisure-time physical activities played roles, with cut-offs for energy expenditure and BMI in rural and urban localities. The physical activity information determined energy expenditure, while BMI determined the risk of obesity among the subjects. Statistically, age has a strong and direct association with obesity in both rural and urban settings, while energy expenditure has an inverse association. Findings from this study showed that in developing societies, age tends to be a risk factor for obesity, whereas energy expenditure tends to be protective. Level of education and economic development are also relevant modifiers of the influences exerted by these variables.

Keywords: age, energy expenditure, BMI, rural/urban

Procedia PDF Downloads 400