Search results for: complexity measurement
2167 Transfer of Constraints or Constraints on Transfer? Syntactic Islands in Danish L2 English
Authors: Anne Mette Nyvad, Ken Ramshøj Christensen
Abstract:
In the syntax literature, it has standardly been assumed that relative clauses and complement wh-clauses are islands for extraction in English, and that constraints on extraction from syntactic islands are universal. However, the Mainland Scandinavian languages have been known to provide counterexamples. Previous research on Danish has shown that neither relative clauses nor embedded questions are strong islands in Danish. Instead, extraction from this type of syntactic environment is degraded due to structural complexity, and it interacts with non-structural factors such as the frequency of occurrence of the matrix verb, the possibility of temporary misanalysis leading to semantic incongruity, and exposure over time. We argue that these facts can be accounted for with parametric variation in the availability of CP-recursion, resulting in the patterns observed, as Danish would then “suspend” the ban on movement out of relative clauses and embedded questions. Given that Danish does not seem to adhere to allegedly universal syntactic constraints, such as the Complex NP Constraint and the Wh-Island Constraint, what happens in L2 English? We present results from a study investigating how native Danish speakers judge extractions from island structures in L2 English. Our findings suggest that Danes transfer their native-language parameter setting when asked to judge island constructions in English. This is compatible with the Full Transfer Full Access Hypothesis, as the latter predicts that Danes would have difficulties resetting their [+/- CP-recursion] parameter in English because they are not exposed to negative evidence.
Keywords: syntax, islands, second language acquisition, Danish
Procedia PDF Downloads 124
2166 Analysis of Labor Effectiveness at Green Tea Dry Sorting Workstation for Increasing Tea Factory Competitiveness
Authors: Bayu Anggara, Arita Dewi Nugrahini, Didik Purwadi
Abstract:
The dry sorting workstation needs labor to produce green tea in the Gambung Tea Factory. Observation results show that some workers are idle at times yet must work overtime to meet production targets. The level of labor effectiveness had never been measured before. The purpose of this study is to determine the level of labor effectiveness and to provide recommendations for improvement based on the results of a Pareto diagram and an Ishikawa diagram. The method used to measure the level of labor effectiveness is Overall Labor Effectiveness (OLE). OLE comprises three indicators: availability, performance, and quality. Recommendations are made based on the results of the Pareto diagram and Ishikawa diagram for indicators that do not meet world standards. Based on the results of the study, the OLE value was 68.19%. Recommendations given to improve labor performance are adding mechanics, rescheduling rest periods, providing special training for labor, and giving rewards to labor. Furthermore, the recommendations for improving labor quality are procuring water-content measuring devices, creating material standard policies, and rescheduling rest periods.
Keywords: Ishikawa diagram, labor effectiveness, OLE, Pareto diagram
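A minimal sketch of the OLE calculation may help: by analogy with Overall Equipment Effectiveness, OLE is commonly taken as the product of the three indicators named above; the abstract does not spell out the formula, and the indicator values below are hypothetical, chosen only to land near the reported 68% figure.

```python
# Minimal sketch of an Overall Labor Effectiveness (OLE) calculation.
# Assumption: OLE = availability * performance * quality, by analogy with
# OEE; the abstract does not state the formula explicitly.

def ole(availability: float, performance: float, quality: float) -> float:
    """Return OLE as a fraction in [0, 1]."""
    return availability * performance * quality

# Hypothetical indicator values that roughly reproduce the reported ~68% OLE:
print(f"OLE = {ole(0.90, 0.85, 0.89):.2%}")  # -> OLE = 68.09%
```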
Procedia PDF Downloads 226
2165 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it exhibits a reduced emission of toxic chemicals. For e-cigarettes, low heating powers can be considered as powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distribution of the aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results about the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, is proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
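For readers unfamiliar with the two summary statistics quoted above, the following sketch shows how the count median aerodynamic diameter (CMAD) and geometric standard deviation (GSD) are obtained from single-particle diameter readings such as a time-of-flight counter would report. The data are synthetic, drawn from a log-normal distribution with parameters chosen to resemble the reported ranges.

```python
import numpy as np

# Sketch: CMAD and GSD from a set of single-particle diameter measurements.
# The sample below is synthetic (log-normal), standing in for real counter data.

rng = np.random.default_rng(0)
diameters_um = rng.lognormal(mean=np.log(0.70), sigma=np.log(1.38), size=10_000)

cmad = np.exp(np.median(np.log(diameters_um)))   # median of a log-normal sample
gsd = np.exp(np.std(np.log(diameters_um)))       # exp of the log-space std. dev.

print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")  # ~0.70 um, ~1.38
```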
Procedia PDF Downloads 47
2164 Intuitive Decision Making When Facing Risks
Authors: Katharina Fellnhofer
Abstract:
The more information and knowledge technology provides, the more important profoundly human skills like intuition, the skill of using nonconscious information, become. As our world becomes more complex, shaken by crises, and characterized by uncertainty, time pressure, ambiguity, and rapidly changing conditions, intuition is increasingly recognized as a key human asset. However, due to methodological limitations in sample size or time frame, or a lack of real-world or cross-cultural scope, precisely how to measure intuition when facing risks on a nonconscious level remains unclear. In light of the measurement challenge related to intuition’s nonconscious nature, a technique is introduced to measure intuition via hidden images serving as nonconscious additional information to trigger intuition. This technique has been tested in a within-subject, fully online design with 62,721 real-world investment decisions made by 657 subjects in Europe and the United States. Bayesian models highlight the technique’s potential to measure the skill of using nonconscious information for conscious decision making. Over the long term, solving the mysteries of intuition and mastering its use could be of immense value in personal and organizational decision-making contexts.
Keywords: cognition, intuition, investment decisions, methodology
Procedia PDF Downloads 85
2163 Identifying and Ranking Environmental Risks of Oil and Gas Projects Using the VIKOR Method for Multi-Criteria Decision Making
Authors: Sasan Aryaee, Mahdi Ravanshadnia
Abstract:
Naturally, any activity is associated with risk; humans have understood this concept since long ago and seek to identify its factors and sources. Poor risk management can cause problems such as delays and unforeseen costs in development projects, temporary or permanent loss of services, loss or theft of information, complexity and limitations in processes, unreliable information caused by rework, holes in the systems, and many similar problems. In the present study, a model is presented to rank the environmental risks of oil and gas projects. The statistical population of the study consists of all executives active in the oil and gas fields, from which the statistical sample was selected randomly. In the framework of the proposed method, the environmental risks of oil and gas projects were first extracted; then a questionnaire based on these indicators was designed on a Likert scale and distributed among the statistical sample. After assessing the validity and reliability of the questionnaire, the environmental risks of oil and gas projects were ranked using the VIKOR method of multi-criteria decision making. The results showed that the best options for HSE planning of oil and gas projects, those that reduce risks, personal injury and casualties while being less costly and adding less time to project duration than other options, concern the release of dye into the environment when painting the generator pond and the presence of the rigger near the crane.
Keywords: ranking, multi-criteria decision making, oil and gas projects, HSE management, environmental risks
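The core ranking step of classical VIKOR is compact enough to sketch. The snippet below computes the group utility S, individual regret R, and compromise index Q for a small decision matrix; the scores, criteria and weights are hypothetical illustrations, not the questionnaire data of the study.

```python
import numpy as np

# Minimal sketch of the classical VIKOR ranking step. X holds the scores of
# m alternatives on n criteria (all treated as benefit criteria here), w the
# criteria weights; both are hypothetical.

def vikor(X: np.ndarray, w: np.ndarray, v: float = 0.5) -> np.ndarray:
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    d = w * (f_best - X) / (f_best - f_worst)        # weighted normalized distances
    S, R = d.sum(axis=1), d.max(axis=1)              # group utility / individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q                                         # lower Q = better compromise rank

X = np.array([[7.0, 5.0, 8.0], [6.0, 9.0, 5.0], [8.0, 6.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])
print(np.argsort(vikor(X, w)))                       # alternatives, best first
```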
Procedia PDF Downloads 156
2162 Reduce the Impact of Wildfires by Identifying Them Early from Space and Sending Location Directly to Closest First Responders
Authors: Gregory Sullivan
Abstract:
The evolution of global warming has escalated the number and complexity of forest fires around the world. As an example, the United States and Brazil combined generated more than 30,000 forest fires last year. The impact on our environment, structures and individuals is incalculable. The world has learned to try to take this in stride, trying multiple ways to contain fires. Some countries are trying to use cameras in limited areas. There are discussions of using hundreds of low earth orbit satellites, linking them together and interfacing them through ground networks. These are all truly noble attempts to defeat the forest fire phenomenon. But there is a better, simpler answer. A bigger piece of the solution puzzle is to see the fires while they are small, soon after initiation, and report their location (latitude and longitude) to local first responders. This is done by placing a sensor at geostationary orbit (GEO: 26,000 miles above the earth). By placing this small satellite in GEO, we can “stare” at the earth and sense temperature changes. We do not “see” fires, but “measure” temperature changes. This has already been demonstrated on an experimental scale: fires were seen close to initiation, the information was forwarded to first responders, and the system was the first to identify the fires 7 out of 8 times. The goal is to have a small independent satellite at GEO orbit focused only on forest fire initiation. Thus, with one small satellite focused only on forest fire initiation, we hope to greatly decrease the impact on persons, property and the environment.
Keywords: space detection, wildfire early warning, demonstration wildfire detection and action from space, space detection to first responders
Procedia PDF Downloads 69
2161 A Multi-Scale Approach for the Analysis of Fiber-Reinforced Composites
Authors: Azeez Shaik, Amit Salvi, B. P. Gautham
Abstract:
Fiber-reinforced polymer resin composite materials are finding a wide variety of applications in the automotive and aerospace industries because of their high specific stiffness and specific strength compared to metals. New classes of 2D and 3D textile and woven fabric composites offer excellent fracture toughness, as they bridge the cracks formed during fracture. Due to the complexity of their fiber architectures and the resulting composite microstructures, optimized design and analysis of these structures is very complicated. A traditional homogenization approach is typically used to analyze structures made of these materials. This approach usually fails to predict damage initiation as well as damage propagation and the ultimate failure of structures made of woven and textile composites. This study demonstrates a methodology to analyze woven and textile composites using a multi-level, multi-scale modelling approach. In this approach, a geometric repeating unit cell (RUC) is developed and used to build a representative volume element (RVE) with all its constituents and their interactions modeled correctly. The structure is modeled based on the RUC/RVE and analyzed at different length scales with the desired levels of fidelity, incorporating damage and failure. The results are passed across (up and down) the scales qualitatively as well as quantitatively from the perspective of material, configuration and architecture.
Keywords: cohesive zone, multi-scale modeling, rate dependency, RUC, woven textiles
Procedia PDF Downloads 358
2160 Compare Anxiety, Stress, Depression, and Attitude towards Death among Breast Cancer Patients Undergoing Mastectomy and Breast-Conserving
Authors: Mitra JahangirRad, Sheida Sodagar, Maryam Bahrami Hidaji
Abstract:
This study was conducted with the aim of comparing anxiety, stress, depression and attitude towards death among patients with breast cancer who have undergone mastectomy or breast-conserving surgery. The study method is causal-comparative. The statistical population was all patients with breast cancer referred to the Medical Center of Panjom Azar Hospital in Gorgan or to oncologists' offices in this city within eight months. They were selected using purposive sampling. The sample size of this study was 45 patients with breast cancer undergoing mastectomy and 70 patients undergoing breast-conserving surgery. The measurement tools in this study were the Depression, Anxiety, and Stress Scale (DASS-21) as well as the Death Attitude Profile-Revised (DAP-R). The results showed that anxiety, stress and depression among patients with breast cancer undergoing mastectomy or breast-conserving surgery differ significantly, whereas their attitudes towards death do not. From these findings, it can be concluded that although most patients with breast cancer encounter many psychological problems, patients undergoing mastectomy experience more anxiety, stress and depression relative to patients with breast-conserving surgery, and it seems that they need more supportive therapy.
Keywords: anxiety, breast cancer, depression, death, mastectomy
Procedia PDF Downloads 413
2159 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced gradient methods (VR) compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
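The combination described above is simple enough to sketch. The snippet below pairs the L-SVRG estimator (a loopless reference point updated with small probability each step) with an AdaGrad coordinate-wise step size, on a least-squares toy problem; it is an illustration of the general idea, not the authors' exact AdaLVR, and all hyperparameters are illustrative.

```python
import numpy as np

# Simplified sketch of the AdaLVR idea: L-SVRG gradient estimator fed into an
# AdaGrad-style coordinate-wise step. Objective and constants are illustrative.

rng = np.random.default_rng(1)
A, b = rng.normal(size=(200, 20)), rng.normal(size=200)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]       # per-sample gradient
n, d = A.shape

w = np.zeros(d); w_ref = w.copy()
mu = (A.T @ (A @ w_ref - b)) / n                     # full gradient at reference
G, eta, p = np.zeros(d), 0.5, 1.0 / n                # AdaGrad accum., step, swap prob.

for t in range(5000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(w_ref, i) + mu         # L-SVRG estimator
    G += g * g                                       # AdaGrad accumulator
    w -= eta * g / (np.sqrt(G) + 1e-8)
    if rng.random() < p:                             # loopless reference update
        w_ref = w.copy()
        mu = (A.T @ (A @ w_ref - b)) / n

print(f"final objective: {0.5 * np.mean((A @ w - b) ** 2):.4f}")
```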
Procedia PDF Downloads 68
2158 An Energy Efficient Spectrum Shaping Scheme for Substrate Integrated Waveguides Based on Spread Reshaping Code
Authors: Yu Zhao, Rainer Gruenheid, Gerhard Bauch
Abstract:
In the microwave and millimeter-wave transmission region, the substrate-integrated waveguide (SIW) is a very promising candidate for the development of circuits and components. It facilitates transmission at data rates in excess of 200 Gbit/s. An SIW mimics a rectangular waveguide by approximating the closed sidewalls with a via fence. This structure suppresses low-frequency components and makes the SIW channel a bandpass or high-pass filter. This channel characteristic impedes conventional baseband transmission using the non-return-to-zero (NRZ) pulse shaping scheme. Therefore, mixers are commonly proposed as carrier modulator and demodulator in order to facilitate passband transmission. However, carrier modulation is not an energy-efficient solution, because modulation and demodulation at high frequencies consume a lot of energy. For the first time to our knowledge, this paper proposes a low-complexity spectrum shaping scheme for the SIW channel, namely the spread reshaping code. It aims at matching the spectrum of the transmit signal to the channel frequency response. It facilitates transmission through the SIW channel while avoiding carrier modulation; in some cases, it does not even need equalization. Simulations reveal good performance of this scheme: eye opening is achieved without any equalization or modulation for the respective transmission channels.
Keywords: bandpass channel, eye-opening, switching frequency, substrate-integrated waveguide, spectrum shaping scheme, spread reshaping code
Procedia PDF Downloads 159
2157 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
Cropping systems are a method used by farmers. It is an environmentally friendly method, protecting natural resources (soil, water, air, nutritive substances) while increasing production at the same time, taking some crop particularities into account. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. Genetic algorithms have proven efficient at solving optimization problems, and their polynomial complexity allows them to be applied to harder and more varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures. One of the expected results is to optimize the usage of resources in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities which yield the highest profit and, thus, minimize costs. The algorithm uses genetic-based operators (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome. Results about the efficiency of this method will be presented in a special section. The implementation of this method would benefit the activity of farmers by giving them hints and helping them use resources efficiently.
Keywords: chromosomes, cropping, genetic algorithm, genes
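A minimal sketch of the encoding described above follows: each chromosome is a rotation (a sequence of crops), each gene a crop, with mutation and crossover acting on the sequence. The profit values and the penalty for repeating a crop in consecutive years are hypothetical stand-ins for the agronomic model of the paper.

```python
import random

# Minimal GA sketch: chromosome = crop rotation, gene = one crop. Profit
# values and the monoculture penalty are hypothetical.

CROPS = {"wheat": 3.0, "maize": 4.0, "soy": 3.5, "barley": 2.5}
YEARS, POP, GENS = 6, 40, 200

def fitness(rotation):
    profit = sum(CROPS[c] for c in rotation)
    repeats = sum(a == b for a, b in zip(rotation, rotation[1:]))
    return profit - 2.0 * repeats                    # penalize consecutive repeats

def crossover(a, b):
    cut = random.randrange(1, YEARS)                 # one-point crossover
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(list(CROPS)) if random.random() < rate else c
            for c in rotation]

pop = [[random.choice(list(CROPS)) for _ in range(YEARS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                          # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(best, round(fitness(best), 2))
```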
Procedia PDF Downloads 426
2156 Determinants of Firm Financial Performance: An Empirical Investigation in Context of Public Limited Companies
Authors: Syed Hassan Amjad
Abstract:
In today’s competitive environment, in order for a company to survive, it must continually improve its performance by reducing cost, improving quality and productivity, and easing access to the market. The purpose of this thesis is to examine firm financial growth and performance and which types of factors affect firm financial performance. This paper examines the key determinants of firm financial performance, differentiating between financial and non-financial drivers. There are many ways to measure firm financial performance, but here the measures are taken in aggregate, covering debt, tax rate, operating expenses, earnings per share and economic conditions. Similar studies have been done in developed countries, and this research shows that foreign companies face many difficulties in improving firm financial performance. We found that marketing expenditures and international diversification had a positive impact on firm valuation. We also found that a firm's ownership composition, particularly the level of equity ownership by domestic financial institutions and dispersed public shareholders, as well as the leverage of the firm, tax rate and economic conditions, were important factors affecting its financial performance.
Keywords: debt, tax rate, firm financial performance, operating expenses, dividend per share, economic conditions
Procedia PDF Downloads 341
2155 Screening and Optimization of Pretreatments for Rice Straw and Their Utilization for Bioethanol Production Using Developed Yeast Strain
Authors: Ganesh Dattatraya Saratale, Min Kyu Oh
Abstract:
Rice straw is one of the most abundant lignocellulosic waste materials, with an annual world production of about 731 Mt. This study addresses the effective utilization of this waste biomass for biofuel production. We present a comparative assessment of numerous pretreatment strategies for rice straw, comprising the major physical, chemical and physicochemical methods. Among the different methods employed, alkaline pretreatment in combination with sodium chlorite/acetic acid delignification proved the most efficient, significantly improving the enzymatic digestibility of rice straw. A cellulase dose of 20 filter paper units (FPU) released a maximum of 63.21 g/L of reducing sugar, with a 94.45% hydrolysis yield and a 64.64% glucose yield from rice straw. The effects of the different pretreatment methods on biomass structure and complexity were investigated by FTIR, XRD and SEM analytical techniques. Finally, the enzymatic hydrolysate of rice straw was used for ethanol production with the developed yeast strain Saccharomyces cerevisiae SR8. The developed strain enabled efficient fermentation of xylose and glucose and achieved higher ethanol production. Thus, bioethanol production from lignocellulosic waste biomass is a generic, applicable methodology with great implications for using ‘green raw materials’ and producing ‘green products’, much needed today.
Keywords: rice straw, pretreatment, enzymatic hydrolysis, FPU, Saccharomyces cerevisiae SR8, ethanol fermentation
Procedia PDF Downloads 537
2154 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
Spectral Clustering (SC) is one of the currently popular clustering techniques because of its advantages over conventional approaches such as hierarchical clustering and k-means. However, one of the disadvantages of SC is its time-consuming eigenvector computation. A number of attempts have been proposed to overcome this disadvantage, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Among PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing all eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome PIC's inter-class collision problem while retaining the efficiency of PIC. In this paper, we develop Parallel DPIC (PDPIC) to improve time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC proved more accurate and less time-consuming than the compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
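The plain PIC baseline that DPIC and PDPIC build on is short enough to sketch: run a truncated power iteration on the row-normalized affinity matrix and cluster the resulting one-dimensional pseudo-eigenvector with k-means. The deflation step and the Spark parallelization of the paper are omitted here, and the affinity matrix below is a toy example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of plain Power Iteration Clustering (PIC): truncated power iteration
# on W = D^-1 A, then k-means on the 1-D pseudo-eigenvector.

def pic(A, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    W = A / A.sum(axis=1, keepdims=True)             # row-normalize: W = D^-1 A
    v = rng.random(A.shape[0])
    for _ in range(iters):                           # stop well before full
        v = W @ v                                    # convergence, while clusters
        v /= np.abs(v).sum()                         # are still separated
    return KMeans(n_clusters=k, n_init=10).fit_predict(v.reshape(-1, 1))

blocks = np.kron(np.eye(2), np.ones((4, 4)))          # two obvious clusters
A = blocks + 0.05                                     # weak inter-cluster ties
print(pic(A, k=2))                                    # e.g. [0 0 0 0 1 1 1 1]
```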
Procedia PDF Downloads 188
2153 GeoWeb at the Service of Household Waste Collection in Urban Areas
Authors: Abdessalam Hijab, Eric Henry, Hafida Boulekbache
Abstract:
The complexity of the city makes sustainable management of the urban environment more difficult. Managers are required to make significant human and technical investments, particularly in household waste collection (the focus of our research). The aim of this communication is to propose a collaborative geographic multi-actor device (MGCD) based on the link between information and communication technologies (ICT) and geo-web tools, in order to involve urban residents in household waste collection processes. Our method is based on a collaborative/motivational concept between the city and its residents. It is a geographic collaboration dedicated to the general public (citizens, residents, and any other participant), based on the real-time allocation and geographic location of topological, geographic and multimedia data in the form of local geo-alerts (location-specific problems) related to household waste in an urban environment. This contribution allows us to understand the extent to which residents can assist and contribute to the development of household waste collection processes for a better-protected urban environment. This suggestion also gives a good idea of how residents can contribute to the data bank for future uses. Moreover, it will contribute to transforming the population into smart inhabitants, an essential component of a smart city. The proposed model will be tested in the Lamkansa sampling district of Casablanca, Morocco.
Keywords: information and communication technologies (ICTs), GeoWeb, geo-collaboration, city, inhabitant, waste, collection, environment
Procedia PDF Downloads 127
2152 Workforce Optimization: Fair Workload Balance and Near-Optimal Task Execution Order
Authors: Alvaro Javier Ortega
Abstract:
A large number of companies face the challenge of matching highly skilled professionals to high-end positions through human resource deployment professionals. However, when the lists of professionals and tasks to be matched grow beyond a few dozen entries, the result of this process is far from optimal and takes a long time to produce. Therefore, an automated assignment algorithm for this workforce management problem is needed. The majority of companies are divided into several sectors or departments, where trained employees with different experience levels deal with a large number of tasks daily. The execution order of all tasks is also of major consequence, since some tasks can only run once the result of another task is available. Thus, a wrong execution order leads to long waiting times between consecutive tasks. The desired goal is, therefore, to create accurate matches and a near-optimal execution order that maximizes the number of tasks performed and minimizes the idle time of the expensive skilled employees. The problem described above can be modeled as a mixed-integer non-linear program (MINLP), as will be shown in detail throughout this paper. A large number of MINLP algorithms have been proposed in the literature. Here, genetic algorithm solutions are considered, and a comparison between two different mutation approaches is presented. The simulated results, considering different complexity levels of assignment decisions, show the appropriateness of the proposed model.
Keywords: employees, genetic algorithm, industry management, workforce
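One way to picture the idle-time objective described above is a small evaluator: given an assignment of tasks to employees and an execution order per employee, it sums the waiting time caused by cross-employee precedences. This is only a sketch of the kind of objective a candidate solution would be scored with; task names, durations and dependencies are hypothetical.

```python
# Sketch: score an assignment/order by the idle time from cross-employee
# precedences. All data are hypothetical.

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
requires = {"C": {"A", "B"}, "D": {"C"}}              # task -> prerequisite tasks

def total_idle(schedule):
    """schedule: {employee: [tasks in execution order]} -> total idle time."""
    finish, clock, idle = {}, {e: 0 for e in schedule}, 0
    pending = {e: list(order) for e, order in schedule.items()}
    while any(pending.values()):
        progressed = False
        for e, queue in pending.items():
            if queue and all(p in finish for p in requires.get(queue[0], ())):
                task = queue.pop(0)
                start = max([clock[e]] + [finish[p] for p in requires.get(task, ())])
                idle += start - clock[e]              # employee waits for inputs
                clock[e] = start + durations[task]
                finish[task] = clock[e]
                progressed = True
        if not progressed:
            return float("inf")                       # deadlock: infeasible order
    return idle

print(total_idle({"emp1": ["A", "C"], "emp2": ["B", "D"]}))  # -> 5 (emp2 waits on C)
```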
Procedia PDF Downloads 166
2151 Analysis of Indoor Air Quality and Sick Building Syndrome in Control Room Oil Gas Refinery
Authors: Dessy Laksyana Utami
Abstract:
Sick building syndrome comprises various nonspecific symptoms that occur in the occupants of a building. It commonly increases sickness absenteeism and causes a decrease in worker productivity. Evidence suggests that what is called sick building syndrome is at least three separate entities, each with at least one cause. The following factors might be primarily responsible for sick building syndrome: chemical contaminants, biological contaminants, inadequate ventilation and electromagnetic radiation. In many cases it is due to insufficient maintenance of the HVAC (heating, ventilation, air conditioning) system in the building, and the syndrome is increasingly becoming a major occupational hazard. An analytic cross-sectional design was used. Forty respondents were recruited to the study, with a mean age of 35 years (range 20-55). Based on the data obtained, 80% of respondents reported significant ongoing health problems in the eyes, head and nose; 60% had severe symptoms in the throat and stomach as well as cough; 50% had gastrointestinal disorders; 40% reported fatigue; and 25% experienced all symptoms of sick building syndrome. To support the evidence of sick building syndrome, further checks of some of the factors are needed in future research, i.e., measurement of chemical contaminants, biological contaminants, inadequate ventilation and electromagnetic radiation.
Keywords: indoor air pollution, sick building syndrome, indoor air quality, oil gas pollution
Procedia PDF Downloads 136
2150 Technology Transfer and FDI: Some Lessons for Tunisia
Authors: Assaad Ghazouani, Hedia Teraoui
Abstract:
The purpose of this article is to see whether FDI actually contributes to technology transfer in Tunisia, or whether there are other sources that can guarantee this transfer. The answer to this problem was developed gradually, following an approach that combines economic theory, the Tunisian context, and econometric and statistical tools. We examined the relationship between technology transfer and FDI in Tunisia over a period of 40 years, from 1970 to 2010. We estimated in two stages: first, a growth equation, from which we retrieved the regression residual (a proxy for technology); second, we regressed this residual on European FDI, exports of manufactures and imports of goods from the European Union, in addition to other variables included to test the robustness of the results and to describe the level of infrastructure in the country. It follows from our study that technology transfer does not originate primarily and exclusively in FDI: FDI is only weakly related econometrically to technology transfer, and the spillover effect of FDI does not seem to occur according to our results. However, the relationship between technology transfer and imports is negative and significant. Although this result is counter-intuitive, it is recurrent in the panel-data literature and has given rise to intense debate on microeconomic modelling as well as on empirical applications. Technology transfer through trade or foreign investment has become a catalyst for growth, recognized in particular by numerous empirical studies. However, the relationship between technology transfer and FDI is more complex than it appears. This complexity is due primarily, but not exclusively, to the close link between FDI and the characteristics of the host country. It is essentially the host's responsibility to establish general conditions that are transparent and conducive to investment, and to strengthen the human and institutional capacity necessary for foreign capital flows to have real effects on growth.
Keywords: technology transfer, foreign direct investment, economics, finance
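The two-stage procedure described above can be sketched in a few lines: stage one estimates a growth equation and keeps its residual as a technology proxy; stage two regresses that proxy on FDI and trade variables. The snippet below is a minimal illustration on synthetic data, not the Tunisian series used in the paper; variable names and the data-generating process are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Two-stage sketch: residual of a growth equation as a technology proxy,
# then regressed on FDI, exports and imports. Data are synthetic.

rng = np.random.default_rng(42)
T = 40                                                # ~40 annual observations
capital, labor = rng.normal(size=T), rng.normal(size=T)
fdi, exports, imports = (rng.normal(size=T) for _ in range(3))
growth = 0.6 * capital + 0.3 * labor + rng.normal(scale=0.5, size=T)

# Stage 1: growth equation; its residual = unexplained part (technology proxy).
stage1 = sm.OLS(growth, sm.add_constant(np.column_stack([capital, labor]))).fit()
tech_proxy = stage1.resid

# Stage 2: does FDI (or trade) explain the technology proxy?
X2 = sm.add_constant(np.column_stack([fdi, exports, imports]))
stage2 = sm.OLS(tech_proxy, X2).fit()
print(stage2.summary())
```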
Procedia PDF Downloads 320
2149 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms
Authors: Sagri Sharma
Abstract:
Analysis of diseases integrating multiple factors increases the complexity of the problem; therefore, the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the interdependence of the various parameters, the use of traditional methodologies has not been very effective, and newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing predictions on previously unseen data. These algorithms are used in applications ranging from image analysis to protein structure and function prediction, and they are trained on a known dataset to build a predictor model that generates reasonable predictions in response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. The well-known machine learning algorithm Support Vector Machine (SVM) is thus applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are to develop a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, and to develop a predictive framework that can perform classification tasks on new, unseen data.
Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine
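A minimal sketch of the SVM workflow described above: rows are samples (HCC vs. control), columns are genes, and a linear SVM is evaluated with cross-validation. The expression matrix below is synthetic with a planted signal, standing in for a real dataset; shapes and labels are illustrative only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch: SVM classification of gene expression profiles (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2000))                      # 120 samples x 2000 genes
y = rng.integers(0, 2, size=120)                      # 1 = HCC, 0 = control
X[y == 1, :25] += 1.0                                 # plant a weak signal in 25 genes

model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```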
Procedia PDF Downloads 428
2148 Advanced Combinatorial Method for Solving Complex Fault Trees
Authors: José de Jesús Rivero Oliva, Jesús Salomón Llanes, Manuel Perdomo Ojeda, Antonio Torres Valle
Abstract:
Combinatorial explosion is a common problem for both predominant methods for solving fault trees: the Minimal Cut Set (MCS) approach and the Binary Decision Diagram (BDD). High memory consumption impedes the complete solution of very complex fault trees. In these cases, only approximate, non-conservative solutions are possible, using truncation or other simplification techniques. The paper proposes a method (CSolv+) for solving complex fault trees without any possibility of combinatorial explosion. Each individual MCS is immediately discarded after its contribution to the basic-event importance measures and the top-gate upper-bound probability (TUBP) has been accounted for. An estimation of the top-gate exact probability (TEP) is also provided. Therefore, running on a computer cluster, CSolv+ guarantees the complete solution of complex fault trees. It was successfully applied to 40 fault trees from the Aralia fault tree database, evaluating the top-gate probability, the 1000 most significant MCSs (SMCS), and the Fussell-Vesely, RRW and RAW importance measures for all basic events. The highly complex fault tree nus9601 was solved with truncation probabilities from 10⁻²¹ to 10⁻²⁷ only to limit the execution time. The solution corresponding to 10⁻²⁷ evaluated 3,530,592,796 MCSs in 3 hours and 15 minutes.
Keywords: system reliability analysis, probabilistic risk assessment, fault tree analysis, basic events importance measures
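The streaming accounting idea can be illustrated with the standard minimal-cut-set upper bound, under which each cut set contributes one factor to the top-gate probability bound and can then be discarded. This is a sketch of the general bookkeeping, not the authors' CSolv+ implementation; the basic-event probabilities and cut sets are hypothetical.

```python
from math import prod

# Streaming sketch: each minimal cut set contributes once to the top-gate
# upper-bound probability (min-cut upper bound) and is then discarded.

p_basic = {"pump_fails": 1e-3, "valve_stuck": 5e-4, "power_loss": 2e-4}
mcs_stream = [("pump_fails",), ("valve_stuck", "power_loss")]  # would be huge in practice

complement = 1.0                                      # running product of (1 - P(MCS_i))
for mcs in mcs_stream:                                # one pass; nothing is stored
    p_mcs = prod(p_basic[e] for e in mcs)             # cut set probability
    complement *= 1.0 - p_mcs                         # importance measures would be
tubp = 1.0 - complement                               # updated here as well

print(f"TUBP = {tubp:.6e}")                           # upper bound on the exact TEP
```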
Procedia PDF Downloads 45
2147 Entrepreneurial Leadership in Malaysian Public University: Competency and Behavior in the Face of Institutional Adversity
Authors: Noorlizawati Abd Rahim, Zainai Mohamed, Zaidatun Tasir, Astuty Amrin, Haliyana Khalid, Nina Diana Nawi
Abstract:
Entrepreneurial leaders have been sought as in-demand talents to lead profit-driven organizations during turbulent and unprecedented times. However, research regarding the pertinence of their roles in the public sector has been limited. This paper examined the characteristics of the challenging experiences encountered by senior leaders in public universities that require them to embrace entrepreneurialism in their leadership. Through a focus group interview with five top senior leaders of Malaysian universities with experience as Vice-Chancellor, we explored and developed a framework of institutional adversity characteristics and exemplary entrepreneurial leadership competency in the face of adversity. The complexity of diverse stakeholders, the multiplicity of academic disciplines, unfamiliarity with leading different and broader roles, setting new directions, and creating change in a high-velocity and uncertain environment are among the dimensions that characterize institutional adversity. Our findings revealed that learning agility, opportunity recognition capacity and bridging capability are among the characteristics of entrepreneurial university leaders. The findings reinforced that the presence of specific attributes in institutional adversity, and experience in overcoming those challenges, may contribute to the development of entrepreneurial leadership capabilities.
Keywords: bridging capability, entrepreneurial leadership, leadership development, learning agility, opportunity recognition, university leaders
Procedia PDF Downloads 110
2146 Coating of Polyelectrolyte Multilayer Thin Films on Poly(S/EGDMA) HIPE Loaded with Hydroxyapatite as a Scaffold for Tissue Engineering Application
Authors: Kornkanok Noulta, Pornsri Pakeyangkoon, Stephen T. Dubas, Pomthong Malakul, Manit Nithithanakul
Abstract:
In recent years, interest in the development of materials for tissue engineering applications has increased considerably. Poly(high internal phase emulsion) (polyHIPE) foam is a good candidate for use in tissue engineering applications due to its 3D structure and high porosity with interconnected pores. The polyHIPE was prepared from poly(styrene/ethylene glycol dimethacrylate) through the high internal phase emulsion polymerization technique and loaded with hydroxyapatite (HA) to improve biocompatibility. To further increase the hydrophilicity of the obtained polyHIPE, the layer-by-layer polyelectrolyte multilayer (PEM) technique was used. The surface properties of the polyHIPE were characterized by contact angle measurement. Morphology and pore size were observed by scanning electron microscopy (SEM). Cell viability was assessed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay.
Keywords: polyelectrolyte multilayer thin film, high internal phase emulsion, polyHIPE foam, scaffold, tissue engineering
Procedia PDF Downloads 350
2145 Data-Driven Performance Evaluation of Surgical Doctors Based on Fuzzy Analytic Hierarchy Processes
Authors: Yuguang Gao, Qiang Yang, Yanpeng Zhang, Mingtao Deng
Abstract:
To enhance the safety, quality and efficiency of the healthcare services provided by surgical doctors, we propose a comprehensive approach to the performance evaluation of individual doctors that incorporates insights from performance data as well as the views of different stakeholders in the hospital. Exploratory factor analysis was first performed on collective multidimensional performance data of surgical doctors, extracting key factors that encompass the assessment of professional experience and service performance. A two-level indicator system was then constructed, for which we developed a weighted interval-valued spherical fuzzy analytic hierarchy process to analyze the relative importance of the indicators while handling subjectivity and disparity in the decision making of the multiple parties involved. Our analytical results reveal that, among the key factors identified as instrumental for evaluating surgical doctors' performance, clinical workload and complexity of service are valued more highly than capacity of service and professional experience, while efficiency of resource consumption ranks comparatively lowest in importance. We also provide a retrospective case study to illustrate the effectiveness and robustness of our quantitative evaluation model by assigning meaningful performance ratings to individual doctors based on the weights developed through our approach.
Keywords: analytic hierarchy processes, factor analysis, fuzzy logic, performance evaluation
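The crisp eigenvector step that underlies any AHP variant, including the fuzzy one in the paper, is short enough to sketch: derive indicator weights from a pairwise comparison matrix via its principal eigenvector. The judgments below are hypothetical, loosely reflecting the reported ordering (workload/complexity above capacity and experience); the interval-valued spherical fuzzy extension is not shown.

```python
import numpy as np

# Sketch of classical (crisp) AHP weighting from a pairwise comparison matrix.
# Rows/columns: workload, complexity of service, capacity, resource efficiency
# (judgments are hypothetical).
A = np.array([[1.0, 2.0, 3.0, 5.0],
              [1/2, 1.0, 2.0, 4.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/5, 1/4, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                           # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                          # normalized indicator weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)        # consistency index
print(np.round(w, 3), f"CI = {ci:.3f}")
```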
Procedia PDF Downloads 56
2144 Learning Activities in Teaching Nihon-Go in the Philippines: Basis for a Proposed Action Plan
Authors: Esperanza C. Santos
Abstract:
The Japanese language was traditionally considered a means of imparting culture and training aesthetic experience in students, and therefore as something beyond the practical aims of language teaching and learning. Due to the complexity of foreign languages, many language learners and teachers have shared deep reservations about the potential of a foreign language to enhance the communication skills of students. In spite of the arguments against the use of a foreign language (Nihon-go) in the classroom, the researcher strongly supports the use of Nihon-go in teaching communication skills, believing that Nihon-go is a valuable resource to be exploited in the classroom in order to help students explore the language in an interesting and challenging way. The focus of this research is to find out the relationship between preferences, opinions and perceptions on the one hand and communication skills on the other. This study also identifies the significance of this relationship in the activities employed among the junior and senior students in Foreign Language 2 at the Imus Institute, Imus, Cavite, during the academic year 2013-2014. The results of the study are expected to encourage further studies focused on communication skills as brought about by the identified factors, namely preferences, opinions and perceptions of the benefits: language acquisition, access to Japanese culture, and students' interpretative ability. This research therefore addresses the issues and concerns of how to effectively teach different learning activities in a Nihon-go class.
Keywords: preferences, opinions, perceptions, language acquisition
Procedia PDF Downloads 308
2143 Performance Measurement of Logistics Systems for Thailand's Wholesales and Retails Industries by Data Envelopment Analysis
Authors: Pornpimol Chaiwuttisak
Abstract:
The study aims to compare the performance of logistics for Thailand's wholesale and retail trade industries (except motor vehicles, motorcycles, and stalls) by using data envelopment analysis (DEA). The Thailand Standard Industrial Classification of 2009 (TSIC-2009) categorizes these industries into sub-group no. 45: wholesale and retail trade (except the repair of motor vehicles and motorcycles), sub-group no. 46: wholesale trade (except motor vehicles and motorcycles), and sub-group no. 47: retail trade (except motor vehicles and motorcycles). Data used in the study were collected by the National Statistical Office, Thailand. The study uses four input factors: the number of companies, the number of personnel in logistics, the training cost in logistics, and outsourced logistics management. The output factor is the percentage of enterprises having inventory management. The results showed that the average relative efficiency of small-sized enterprises equals 27.87 percent, versus 49.68 percent for medium-sized enterprises.
Keywords: DEA, wholesales and retails, logistics, Thailand
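For readers unfamiliar with how relative efficiency scores such as those above are produced, the following sketch solves the standard input-oriented CCR DEA linear program once per decision-making unit (DMU). The toy inputs/outputs below merely stand in for the enterprise-level factors listed in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA sketch: minimize theta such that a composite peer
# uses at most theta * inputs of DMU0 and produces at least its outputs.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]]).T  # inputs  (m x n)
Y = np.array([[1.0], [1.0], [1.0], [1.0]]).T                      # outputs (s x n)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j0: int) -> float:
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..n]
    A_in = np.c_[-X[:, [j0]], X]                      # sum(lam * x) <= theta * x0
    A_out = np.c_[np.zeros((s, 1)), -Y]               # sum(lam * y) >= y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]                                   # efficiency score in (0, 1]

print([round(ccr_efficiency(j), 3) for j in range(n)])
```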
Procedia PDF Downloads 414
2142 In-Process Integration of Resistance-Based, Fiber Sensors during the Braiding Process for Strain Monitoring of Carbon Fiber Reinforced Composite Materials
Authors: Oscar Bareiro, Johannes Sackmann, Thomas Gries
Abstract:
Carbon fiber reinforced polymer composites (CFRP) are used in a wide variety of applications due to their advantageous properties and design versatility. The braiding process enables the manufacture of components with good toughness and fatigue strength. However, the failure mechanisms of CFRPs are complex and still present challenges associated with their maintenance and repair. Within the broad scope of structural health monitoring (SHM), strain monitoring can be applied to composite materials to improve reliability, reduce maintenance costs and safely exhaust service life. Traditional SHM systems employ, e.g., fiber optics or piezoelectrics as sensors, which are often expensive, time-consuming and complicated to implement. A cost-efficient alternative is to exploit the conductive properties of fiber-based sensors such as carbon, copper, or constantan (a copper-nickel alloy), which can be utilized as sensors within composite structures to achieve strain monitoring. The structure then provides feedback to a user via electrical signals, which are essential for evaluating its structural condition. This work presents a strategy for the in-process integration of resistance-based sensors (Elektrisola Feindraht AG, CuNi23Mn, Ø = 0.05 mm) into textile preforms during manufacture via the braiding process (Herzog RF-64/120) to achieve strain monitoring of braided composites. For this, flat samples of instrumented composite laminates of carbon fibers (Toho Tenax HTS40 F13 24K, 1600 tex) and epoxy resin (Epikote RIMR 426) were manufactured via vacuum-assisted resin infusion. These flat samples were later cut into test specimens, and the integrated sensors were wired to the measurement equipment (National Instruments VB-8012) for data acquisition during the mechanical tests. Quasi-static tests (tensile and 3-point bending) were performed following standard protocols (DIN EN ISO 527-1 & 4, DIN EN ISO 14132); additionally, dynamic tensile tests were executed. These tests assessed the sensor response under different loading conditions and evaluated the influence of the sensor's presence on the mechanical properties of the material. Several orientations of the sensor with regard to the applied loading, as well as several sensor placements inside the laminate, were tested. Strain measurements from the integrated sensors were made through a data acquisition code (LabVIEW) written for the measurement equipment and were then correlated to the strain/stress state of the tested samples. From the assessment of the sensor integration approach, it can be concluded that it allows a seamless sensor integration into the textile preform: no damage to the sensor or negative effect on its electrical properties was detected during inspection after integration. From the mechanical tests of the instrumented samples, it can be concluded that the presence of the sensors does not significantly alter the mechanical properties of the material, and that there is a good correlation between the resistance measurements from the integrated sensors and the applied strain. The correlation is of sufficient accuracy to determine the strain state of a composite laminate based solely on the resistance measurements from the integrated sensors.
Keywords: braiding process, in-process sensor integration, instrumented composite material, resistance-based sensor, strain monitoring
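For orientation, the resistance-to-strain conversion behind such a correlation is usually written as strain = (ΔR/R0)/GF for a metallic strain sensor. The sketch below assumes a gauge factor of about 2.0, a typical value for constantan that is not given in the abstract; the paper instead calibrates the correlation experimentally, and the resistance values are hypothetical.

```python
# Sketch: resistance-to-strain conversion for a metallic fiber sensor.
# GF ~ 2.0 is a typical constantan value, assumed here, not taken from the paper.

R0 = 350.0                                            # unloaded resistance, ohms (hypothetical)
GAUGE_FACTOR = 2.0                                    # assumed, typical for constantan

def strain_from_resistance(r_measured: float) -> float:
    """Return mechanical strain (dimensionless) from a resistance reading."""
    return (r_measured - R0) / (R0 * GAUGE_FACTOR)

for r in (350.0, 350.7, 351.4):                       # hypothetical readings
    eps = strain_from_resistance(r)
    print(f"R = {r:.1f} ohm -> strain = {eps * 1e6:.0f} microstrain")
```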
Procedia PDF Downloads 104
2141 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-on-a-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by the laboratory personnel, to ease instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, creating a linear concentration gradient inside a 200 μm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; thanks to the addition of a pH-sensitive fluorophore, the neutralization boundary can be visualized in a detection range of 500-600 nm. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, making the process more time-efficient than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could improve laboratory safety as much as reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
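For reference, the axial spreading that the device exploits is described by the classical Taylor-Aris result for laminar flow in a circular capillary. This is a textbook formula, not stated in the abstract, and the 100 μm radius is inferred from the 200 μm channel diameter quoted above.

```latex
% Classical Taylor-Aris effective axial dispersion coefficient for laminar
% flow in a circular capillary of radius R (here R = 100 um, inferred from
% the 200 um diameter), mean velocity \bar{u} and molecular diffusivity D_m:
\[
  D_{\mathrm{eff}} = D_m + \frac{R^{2}\,\bar{u}^{2}}{48\,D_m}
\]
```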
Procedia PDF Downloads 385
2140 Prediction of Boundary Shear Stress with Gradually Tapering Flood Plains
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Rivers are the main source of water. A river is a form of natural open channel that gives rise to many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. During floods, part of a river's flow is carried by the main channel and the rest is carried by the flood plains. For such compound asymmetric channels, the flow structure becomes complicated due to momentum exchange between the main channel and the adjoining flood plains. The distribution of boundary shear in subsections provides insight into the momentum transfer across the interface between the main channel and the flood plains. Obtaining accurate experimental data is very difficult because of the complexity of the problem. Hence, the Conveyance Estimation System (CES) software has been used to tackle these complex processes and determine the shear stresses at different sections of an open channel having asymmetric flood plains on both sides of the main channel, and the results are compared with symmetric flood plains for various geometrical shapes and flow conditions. Error analysis is also performed to determine the degree of accuracy of the model implemented.
Keywords: depth averaged velocity, non prismatic compound channel, relative flow depth, velocity distribution
Procedia PDF Downloads 121
2139 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for the stacking sequence blending of composite structures. Its computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to the blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automata-based scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 2252138 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from January 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical and continuous numeric features. Then the SVM machine learning algorithm is applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation
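The hybrid pipeline described above can be sketched in a few lines: rank features by an information-gain-style score (estimated here via mutual information), keep the top k, then train an SVM with cross-validation. The synthetic data below only illustrate the pipeline; real inputs would be daily pollutant/weather features with the six AQI levels as labels.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of the IG + SVM hybrid: information-gain-style feature selection
# followed by SVM classification, evaluated with cross-validation.
rng = np.random.default_rng(7)
X = rng.normal(size=(800, 30))                        # 30 candidate features
y = rng.integers(0, 6, size=800)                      # six air-quality levels
X[:, 0] += y                                          # make one feature informative

model = make_pipeline(StandardScaler(),
                      SelectKBest(mutual_info_classif, k=10),
                      SVC(kernel="rbf", C=1.0))
print(f"cross-validated accuracy: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```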
Procedia PDF Downloads 234