Search results for: target firm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3400

3040 Patronage Network and Ideological Manipulations in Translation of Literary Texts: A Case Study of George Orwell's “1984” in Persian Translation in the Period 1980 to 2015

Authors: Masoud Hassanzade Novin, Bahloul Salmani

Abstract:

Translation is not merely a linguistic process; it also unfolds within the cultural frameworks of both the source and target cultures. In the twentieth century, the translation process and translated texts came to be examined chiefly through the patronage framework and ideological grid of the target language. To scrutinize these factors in the translation process, both micro-level and macro-level factors can be taken into consideration. For the purposes of this study, a qualitative research design based on a critical discourse analysis approach was adopted: George Orwell's novel “1984” was chosen as the corpus and contrastively analyzed against its Persian translations. The results reveal distortions embedded in the target texts, shaped by ideological pressure and the patronage network. The manipulated terms differed across categories, revealing the manipulative aspects of the translated texts.

Keywords: critical discourse analysis, ideology, patronage network, translated texts

Procedia PDF Downloads 322
3039 The Influence of Knowledge Spillovers on High-Impact Firm Growth: A Comparison of Indigenous and Foreign Firms

Authors: Yazid Abdullahi Abubakar, Jay Mitra

Abstract:

This paper is concerned with entrepreneurial high-impact firms: firms that generate disproportionate levels of both employment and sales growth and have high levels of innovative activity. It investigates differences between indigenous and foreign firms in the factors influencing high-impact growth. The study analyzes data on 865 firms from the United Kingdom (UK) Innovation Scoreboard, divided into high-impact firms (those achieving positive growth in both sales and employment) and low-impact firms (negative or no growth in sales or employment), in order to identify the critical regional, sectoral, and size-related differences that facilitate knowledge spillovers and high-impact growth for indigenous versus foreign firms. The findings suggest that: 1) firms' access to regional knowledge spillovers (from businesses and higher education institutions) is more strongly associated with high-impact growth for UK firms than for foreign firms; 2) because high-tech sectors make greater use of knowledge spillovers than low-tech sectors, high-tech sectors are more strongly associated with high-impact growth, and the relationship is stronger for UK firms than for foreign firms; 3) because small firms have a greater need for knowledge spillovers than large firms, there is a negative relationship between firm size and high-impact growth, and this negative relationship is stronger for UK firms than for foreign firms.

Keywords: entrepreneurship, high-growth, indigenous firms, foreign firms, small firms, large firms

Procedia PDF Downloads 429
3038 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an EIT device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to the exact location of the target object. The data set for impedance imaging is obtained through repeated current injections and voltage measurements across all electrode pairs. After the calculations needed to obtain the impedance, the information is transmitted to a computer. The data are then processed in MATLAB, interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software), to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with MATLAB's Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired tissue on the user's command.
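Once EIDORS has reconstructed the impedance image, the pipeline described above reduces to two computational steps: locate the target's centroid in the image, then convert that coordinate into joint angles. A minimal Python sketch of those two steps follows; the threshold, link lengths, and the planar two-link geometry are illustrative assumptions, not the paper's actual arm model.

```python
import math

def target_centroid(image, threshold):
    """Centroid (row, col) of pixels whose conductivity exceeds a threshold."""
    points = [(r, c) for r, row in enumerate(image)
                     for c, v in enumerate(row) if v > threshold]
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def planar_2link_ik(x, y, l1, l2):
    """Joint angles (radians) for a planar two-link arm reaching (x, y).

    Returns the elbow-down solution of the standard two-link
    inverse-kinematics equations.
    """
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, cos_t2)))   # clamp for numerical safety
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

A third joint of a 3-DOF arm would typically set the end-effector orientation and can be solved for separately once `t1` and `t2` are known.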

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 272
3037 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence, computer vision, and deep convolutional neural network models in particular, has developed rapidly. As real-world visual object detection tasks grow more complex and recognition accuracy improves, object detection network models have also become very large. Huge deep neural network models are ill-suited to deployment on resource-limited edge devices, and their inference latency is poor. In this paper, knowledge distillation is used to compress a huge, complex deep neural network model, comprehensively transferring the knowledge it contains to a lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel distillation scheme that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target outputs of the teacher network, the relationships between the teacher network's layers, and the feature attention maps of the teacher's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer (an intermediate guidance layer) between the teacher and student networks to bridge the large gap between them. Finally, we add an exploration module to the traditional teacher-student distillation model, so that the student network not only inherits the teacher's knowledge but also explores new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that the proposed network model achieves substantial improvements in both speed and accuracy.
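The soft-target component of knowledge distillation mentioned above can be illustrated compactly. The sketch below implements the standard Hinton-style temperature-softened KL divergence loss in plain Python; the temperature value is illustrative, and the paper's full M-KD loss additionally includes layer-relation and attention-map terms not shown here.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """Distillation term: KL(teacher || student) on softened distributions,
    scaled by T^2 so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
    return temperature * temperature * kl
```

A higher temperature exposes more of the teacher's "dark knowledge" (the relative probabilities of incorrect classes), which is what the student learns from beyond the hard labels.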

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
3036 Genetic Algorithms Multi-Objective Model for Project Scheduling

Authors: Elsheikh Asser

Abstract:

Time and cost are the main goals of construction project management. The first schedule developed may not be suitable for completing the project within the target completion time at minimum total cost. In general, there are time-cost trade-offs (TCT) in completing the activities of a project. This research presents a genetic algorithms (GAs) based multi-objective model for project scheduling that considers different scenarios, such as least cost, least time, and target time.
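A time-cost trade-off GA of the kind described can be sketched in a few lines: each chromosome picks a normal or crashed execution mode per activity, and fitness is a weighted sum of total duration and total cost. The activity data, weights, and GA settings below are purely illustrative, not the paper's.

```python
import random

# Illustrative activities: each has a normal and a crashed (duration, cost) mode.
ACTIVITIES = [
    [(4, 100), (2, 180)],
    [(6, 150), (3, 260)],
    [(5, 120), (4, 160)],
]

def fitness(chrom, w_time=50.0, w_cost=1.0):
    """Weighted sum of total duration and total cost (lower is better)."""
    t = sum(ACTIVITIES[i][g][0] for i, g in enumerate(chrom))
    c = sum(ACTIVITIES[i][g][1] for i, g in enumerate(chrom))
    return w_time * t + w_cost * c

def evolve(pop_size=20, generations=40, pm=0.1, seed=1):
    rng = random.Random(seed)
    n = len(ACTIVITIES)
    # Seed the population with the all-normal and all-crash schedules.
    pop = [[0] * n, [1] * n]
    pop += [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size - 2)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # elitist truncation selection
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randint(1, n - 1)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pm:             # bit-flip mutation
                j = rng.randrange(n)
                child[j] = 1 - child[j]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

Sweeping the time weight `w_time` traces out the least-time versus least-cost scenarios; a true multi-objective version would instead keep a Pareto front rather than a single weighted score.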

Keywords: genetic algorithms, time-cost trade-off, multi-objective model, project scheduling

Procedia PDF Downloads 413
3035 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions for use in nonlinear time-history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum, such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS), or a Conditional Spectrum (CS). Different sets of criteria exist among these methodologies for selecting and scaling ground motions with the objective of obtaining robust estimates of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum while the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. To examine the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
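The scaling stage described above, minimizing the error between a scaled record spectrum and the target while tracking the set's dispersion, can be sketched as follows. The least-squares scale factor and the log-space median/dispersion statistics are common choices in ground motion selection, assumed here rather than taken from the paper.

```python
import math
import statistics

def scale_factor(record_sa, target_sa):
    """Least-squares factor s minimizing sum((s * record - target)^2)
    over the spectral ordinates of the period interval."""
    numerator = sum(r * t for r, t in zip(record_sa, target_sa))
    denominator = sum(r * r for r in record_sa)
    return numerator / denominator

def ensemble_stats(spectra):
    """Period-wise median and lognormal dispersion (std of log Sa)
    of a ground motion set; one spectrum per record."""
    medians, dispersions = [], []
    for period_values in zip(*spectra):
        logs = [math.log(v) for v in period_values]
        medians.append(math.exp(statistics.median(logs)))
        dispersions.append(statistics.pstdev(logs))
    return medians, dispersions
```

In a procedure like the one proposed, records would be scaled so the ensemble medians track the target spectrum while the period-wise dispersions are kept near the target variance rather than collapsed to zero.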

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 583
3034 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival

Authors: Ichiro Takahashi

Abstract:

One conundrum that macroeconomic theory faces is explaining how an economy can revive from a depression in which aggregate demand has fallen substantially below productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model, focusing on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment results in insufficient productive capacity. However, in an economy characterized by a roundabout production method, gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide the buoyancy needed to stimulate investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation period means that once a firm starts a new investment, it continues to invest over subsequent periods. During these gestation periods, the excess demand created by the investing firm spills over to ignite new investment by other firms supplying investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, if the gestation period is short, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction in real wages has two opposing effects on aggregate demand, through two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but the actual labor demand tends to be determined by the derived labor demand, so the second, positive effect does not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is a key to stability: a single firm cannot expect the benefit of such an increased aggregate demand from other firms.

Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability

Procedia PDF Downloads 162
3033 Enzyme Inhibition Activity of Schiff Bases Against Mycobacterium Tuberculosis Using Molecular Docking

Authors: Imran Muhammad

Abstract:

A major cause of infectious disease in the modern world is Mycobacterium tuberculosis (MT). To combat tuberculosis, new and efficient drugs are urgently needed. Schiff bases are known for their biologically active pharmacophores. We therefore selected different vanillin-based Schiff bases and assessed their binding activity against two target enzymes of Mycobacterium tuberculosis, DprE1 (decaprenylphosphoryl-β-D-ribose 2′-epimerase) and DNA gyrase subunit A, using molecular docking. We evaluated the inhibition potential, interactions, and binding modes of these compounds with the target enzymes.

Keywords: schiff bases, tuberculosis, DNA gyrase, DprE1, docking

Procedia PDF Downloads 75
3032 Calculation of Physiological Lung Motion in External Lung Irradiation

Authors: Yousif Mohamed Y. Abdallah, Khalid H. Eltom

Abstract:

This experimental study deals with the measurement of periodic physiological organ motion during external lung irradiation, in order to reduce the exposure of healthy tissue during radiation treatment. The results showed a displacement of 4.52 ± 1.99 mm for the left lung and 8.21 ± 3.77 mm for the right lung; the radiotherapy physician should take suitable countermeasures in case of significant errors. The motion ranged between 2.13 mm (low) and 12.2 mm (high). In conclusion, the calculation of tumour mobility can improve the accuracy of target-area definition in patients undergoing stereotactic radiotherapy for stage I, II, and III non-small-cell lung cancer (NSCLC). Definition of the target volume based on a high-resolution CT scan with a margin of 3-5 mm is appropriate.

Keywords: physiological motion, lung, external irradiation, radiation medicine

Procedia PDF Downloads 418
3031 Detectability Analysis of Typical Aerial Targets from Space-Based Platforms

Authors: Yin Zhang, Kai Qiao, Xiyang Zhi, Jinnan Gong, Jianming Hu

Abstract:

In order to achieve effective detection of aerial targets over long distances from space-based platforms, the mechanism of interaction between the radiation characteristics of aerial targets and the complex scene environment, including sunlight conditions, underlying surfaces, and the atmosphere, is analyzed. A large simulated database of space-based radiance images is constructed, considering several typical aerial targets, target working modes (flight velocity and altitude), illumination and observation angles, background types (cloud, ocean, and urban areas), and sensor spectra ranging from visible to thermal infrared. Target detectability is characterized by the signal-to-clutter ratio (SCR) extracted from the images. The influence laws of target detectability are discussed for different detection bands and instantaneous fields of view (IFOV). Furthermore, the optimal center wavelengths and widths of the detection bands are suggested, and minimum IFOV requirements are proposed. The research can provide theoretical support and scientific guidance for the design of space-based detection systems and on-board information processing algorithms.
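The signal-to-clutter ratio used above to characterize detectability can be computed from a radiance image and a target mask. The sketch below uses one common SCR definition, target-background mean contrast divided by the background standard deviation; the paper's exact formula is not given, so treat this as an assumption.

```python
import statistics

def signal_to_clutter_ratio(image, target_mask):
    """SCR = (mean target radiance - mean background radiance)
           / background standard deviation.
    `image` and `target_mask` are row-major 2-D lists of equal shape."""
    target, background = [], []
    for row, mask_row in zip(image, target_mask):
        for value, is_target in zip(row, mask_row):
            (target if is_target else background).append(value)
    contrast = statistics.fmean(target) - statistics.fmean(background)
    return contrast / statistics.pstdev(background)
```

Higher SCR values indicate an easier detection problem; sweeping this metric over simulated bands and IFOVs is how band selection studies of this kind typically rank sensor configurations.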

Keywords: space-based detection, aerial targets, detectability analysis, scene environment

Procedia PDF Downloads 144
3030 The Pink Elephant: Women who Bully Other Women in the Workplace

Authors: Berri A. Wells

Abstract:

The purpose of this study is to explore the different variables that influence women, specifically Black American or African American women to target and bully other Black American women in the workplace. The Pink Elephant Study seeks to answer the research question, what are some of the factors that prompt Black women to target and harass other Black women in the workplace or other professional settings and organizations? The goal of the study is to enhance the workplace bullying body of knowledge in two specific ways beginning with the inclusion of Black women in the conversation of workplace bullying. A second goal is to hear from and learn from perpetrators of workplace bullying.

Keywords: workplace bullying, incivility at work, women at work, overcoming conflict

Procedia PDF Downloads 119
3029 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in natural language processing (NLP) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, giving rise to lexical, syntactic, semantic, anaphoric, and referential ambiguities. This study focuses mainly on resolving lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we use lemma and part-of-speech (POS) tokens instead: the lemma adds generality, and the POS adds word properties to the token. We design a novel method to create an affinity matrix capturing the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we devise an algorithm that creates sense clusters of tokens from the affinity matrix under the hierarchy of the lemma's POS. Furthermore, three different mechanisms are devised to predict the sense of a target word from the affinity/similarity values: each contextual token contributes some value to each sense of the target word, and the sense with the highest value wins. Because contextual tokens play the key role in creating sense clusters and predicting the sense of the target word, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to explain, in contrast to contemporary deep learning models, which are intricate, time-intensive, and hard to interpret. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results compared to the Most Frequent Sense (MFS) model.
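The affinity-matrix idea can be illustrated with a toy version: co-occurrence counts between lemma_POS tokens stand in for the paper's affinity measure, and each context token votes for the candidate sense it is most affine with. The token names and the counting scheme below are illustrative, not the paper's exact formulation.

```python
from collections import defaultdict
from itertools import combinations

def build_affinity(sentences):
    """Symmetric co-occurrence counts between lemma_POS tokens; a simple
    stand-in for the paper's affinity measure."""
    affinity = defaultdict(int)
    for tokens in sentences:
        for a, b in combinations(sorted(set(tokens)), 2):
            affinity[(a, b)] += 1   # key is always the sorted pair
    return affinity

def predict_sense(context, candidate_senses, affinity):
    """Each context token contributes its affinity with a candidate sense;
    the sense with the highest total wins."""
    def score(sense):
        return sum(affinity[tuple(sorted((sense, token)))] for token in context)
    return max(candidate_senses, key=score)
```

In the full model the matrix would be built over SemCor and organized under the POS hierarchy, with three alternative scoring mechanisms instead of the single sum used here.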

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 108
3028 Corporate Governance and Disclosure Quality: Taxonomy of Tunisian Listed Firms Using the Decision Tree Method Based Approach

Authors: Wided Khiari, Adel Karaa

Abstract:

This study aims to establish a typology of Tunisian listed firms according to their corporate governance characteristics and disclosure quality. The paper uses disclosure scores to examine the corporate governance practices of Tunisian listed firms. A content analysis of 46 Tunisian listed firms from 2001 to 2010 was carried out, and a disclosure index was developed to determine each company's level of disclosure. Disclosure quality is assessed through both the quantity and the nature (type) of information disclosed. Applying the decision tree method, the resulting tree diagrams make it possible to infer the characteristics of a particular firm regardless of its level of disclosure. The results show that the corporate governance characteristics needed to achieve good disclosure quality are not the same for all firms. These structures do not necessarily follow all of the recommendations of best practice, but they converge towards the best combination. Indeed, in practice, there are companies that have good disclosure quality but are not well governed; we expect that, by improving their governance systems, their level of disclosure may improve. These findings show, in general, a convergence towards corporate governance standards, with a few exceptions related to the specificity of Tunisian listed firms, and they point to the need to adopt a code for each context. The findings shed light on the corporate governance features that enhance incentives for good disclosure, making it possible to identify, for each firm at any date, the corporate governance determinants of disclosure quality. More specifically, all else being equal, the obtained tree provides a decision rule by which a company can infer its level of disclosure from certain characteristics of its governance strategy.
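The decision tree method partitions firms by repeatedly choosing the governance feature and threshold that best separate the disclosure classes. A minimal sketch of one such split search, using Gini impurity, is shown below; the feature values and class labels are illustrative, not the study's data.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(feature, labels):
    """Best threshold on one governance feature (e.g. a board-independence
    score), minimizing the weighted Gini impurity of the two child nodes."""
    pairs = sorted(zip(feature, labels))
    best_threshold, best_impurity = None, gini(labels)
    for i in range(1, len(pairs)):
        left = [label for _, label in pairs[:i]]
        right = [label for _, label in pairs[i:]]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if weighted < best_impurity:
            best_threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
            best_impurity = weighted
    return best_threshold, best_impurity
```

A full tree repeats this search recursively over all governance features at each node, which is what produces the readable decision rules the study reports.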

Keywords: corporate governance, disclosure, decision tree, economics

Procedia PDF Downloads 335
3027 The Implementation of Entrepreneurial Marketing in Small Business Enterprise

Authors: Iin Mayasari

Abstract:

This study aims at exploring the influence of aspects of entrepreneurial marketing on a firm's performance. Entrepreneurs should be supported not only by control of resources to obtain sustainable competitive advantage, but also by intangible resources. Entrepreneurial marketing provides the opportunity for entrepreneurs to proactively find better ways to create value for desired customers, to innovate, and to build customer equity. Entrepreneurial marketing lies at the interface between entrepreneurship and marketing and serves as an umbrella for many of the emergent perspectives on marketing. It has eight underlying dimensions: proactiveness, calculated risk-taking, innovativeness, opportunity focus, entrepreneurial orientation, resource leveraging, customer intensity, and value creation. The study used a qualitative method, interviewing eight small companies in the Kudus region of Central Java, Indonesia. The interviewees were the owners and managers of small enterprises in the wood-crafting industry, and the interviews addressed the implementation of the elements of entrepreneurial marketing. The results showed that the small enterprises had implemented the elements of entrepreneurial marketing in their daily activities, and the owners and managers executed them well on the basis of a sound theoretical understanding. The problems in managing the small enterprises related to the extent of government support and to brand management. Furthermore, the innovation process should be improved, especially the use of the internet to promote products, expand the market, and increase firm performance.

Keywords: entrepreneurial marketing, innovativeness, risk taking, opportunity focus

Procedia PDF Downloads 298
3026 Reliability Based Performance Evaluation of Stone Column Improved Soft Ground

Authors: A. GuhaRay, C. V. S. P. Kiranmayi, S. Rudraraju

Abstract:

The present study considers the effect of variation of different geotechnical random variables in the design of stone column-foundation systems for assessing the bearing capacity and consolidation settlement of highly compressible soil. The soil and stone column properties, spacing, diameter and arrangement of stone columns are considered as the random variables. Probability of failure (Pf) is computed for a target degree of consolidation and a target safe load by Monte Carlo Simulation (MCS). The study shows that the variation in coefficient of radial consolidation (cr) and cohesion of soil (cs) are two most important factors influencing Pf. If the coefficient of variation (COV) of cr exceeds 20%, Pf exceeds 0.001, which is unsafe following the guidelines of US Army Corps of Engineers. The bearing capacity also exceeds its safe value for COV of cs > 30%. It is also observed that as the spacing between the stone column increases, the probability of reaching a target degree of consolidation decreases. Accordingly, design guidelines, considering both consolidation and bearing capacity of improved ground, are proposed for different spacing and diameter of stone columns and geotechnical random variables.
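The Monte Carlo estimation of Pf described above amounts to sampling the geotechnical random variables, checking a limit state, and counting failures. In the sketch below the distributions, parameters, and limit-state function are purely illustrative stand-ins for the study's soil and stone column models.

```python
import random

def prob_failure(trials=50_000, seed=42):
    """Monte Carlo Pf: fraction of trials in which the applied load exceeds
    the improved ground's capacity. All distributions are illustrative."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        cr = rng.lognormvariate(0.0, 0.2)   # consolidation-related variable
        load = rng.gauss(100.0, 10.0)       # applied load on the foundation
        capacity = 150.0 * cr               # capacity assumed to scale with cr
        if load > capacity:
            failures += 1
    return failures / trials
```

Repeating the run with a larger coefficient of variation on `cr` shows how Pf grows with parameter uncertainty, which is the mechanism behind the study's COV thresholds.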

Keywords: bearing capacity, consolidation, geotechnical random variables, probability of failure, stone columns

Procedia PDF Downloads 359
3025 Finite Element Modeling of Ultrasonic Shot Peening Process using Multiple Pin Impacts

Authors: Chao-xun Liu, Shi-hong Lu

Abstract:

In spite of its importance to the aerospace and automobile industries, little or no attention has been devoted to accurate modeling of the ultrasonic shot peening (USP) process. The purpose of this study is therefore to conduct finite element analysis of the process using a realistic multiple-pin impact model with the explicit solver of ABAQUS. We investigate the effect of several key parameters on the residual stress distribution within the target, including impact velocity, incidence angle, friction coefficient between pins and target, and number of impacts. The results reveal that impact velocity and number of impacts have an obvious effect, and that vertical impact produces the most favorable residual stress distribution. We then compare the results with data from a USP experiment to verify the accuracy of the model. Analysis of the multiple-pin impact data reveals the relationships between peening process parameters and peening quality, which are useful for identifying the parameters that must be controlled and regulated in order to produce a more beneficial compressive residual stress distribution within the target.

Keywords: ultrasonic shot peening, finite element, multiple pins, residual stress, numerical simulation

Procedia PDF Downloads 448
3024 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment by Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to capital market imperfections. The study period is 1999-2000 to 2013-2014, covering 618 manufacturing companies for which continuous data are available throughout. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods, namely fixed effects and random effects, are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results support the suitability of the fixed effects model for the estimation. Company cash flow and liquidity are used as proxies for internal financing constraints. In accordance with various theories of corporate investment, we include other firm-specific variables, such as firm age, firm size, profitability, sales, and leverage, as control variables in the model. The econometric analysis shows that internal cash flow and liquidity have a significant and positive impact on corporate investment. Cost of capital, sales growth, and growth opportunities are also found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator, and Tobin's q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity: firms with cash flow greater than zero are placed in one group and firms with cash flow less than zero in another, and the firms are likewise divided on the basis of liquidity. We find that the results are robust for companies with both positive and negative cash flow and liquidity. The results for the other variables are also in line with those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. They imply that corporate managers should focus on projects with higher expected cash inflows to avoid financing constraints, and should also maintain adequate liquidity to minimize external financing costs.
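The fixed effects (within) estimator used in the analysis can be sketched for a single regressor: demean each firm's series to sweep out the firm-specific effect, then run pooled OLS on the demeaned data. The two-firm example in the test is illustrative, not drawn from the PROWESS data.

```python
def within_ols(y_panel, x_panel):
    """Fixed-effects (within) estimator for one regressor.

    `y_panel` and `x_panel` are lists of per-firm time series. Demeaning by
    firm removes the time-invariant firm effect; the pooled OLS slope on the
    demeaned data is the within estimate.
    """
    y_demeaned, x_demeaned = [], []
    for ys, xs in zip(y_panel, x_panel):
        y_mean = sum(ys) / len(ys)
        x_mean = sum(xs) / len(xs)
        y_demeaned += [v - y_mean for v in ys]
        x_demeaned += [v - x_mean for v in xs]
    return (sum(a * b for a, b in zip(x_demeaned, y_demeaned))
            / sum(a * a for a in x_demeaned))
```

In the study's setting, `x` would be a constraint proxy such as cash flow or liquidity and `y` fixed investment, with the remaining controls added as further demeaned regressors.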

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 241
3023 A Scenario-Based Experiment Comparing Managerial and Front-Line Employee Apologies in Terms of Customers' Perceived Justice, Satisfaction, and Commitment

Authors: Ioana Dallinger, Vincent P. Magnini

Abstract:

Due to the many moving parts and the high human component, mistakes and failures sometimes occur during transactions in service environments. Because a certain portion of such failures is unavoidable, many service providers constantly look for guidance on the optimal ways to manage failures and recoveries. Through a scenario-based experiment, the findings of this study run counter to the empowerment approach (i.e., that frontline employees should be empowered to resolve failure situations on their own). Specifically, this study finds that customers' perceptions of distributive, procedural, and interactional justice are significantly higher [p-values < .05] when a manager delivers an apology than when the frontline provider does. Moreover, customers' satisfaction with the recovery and commitment to the firm are also significantly stronger [p-values < .05] when a manager apologizes. Interestingly, this study also empirically tests the effects of combined apologies from both the manager and the employee and finds that the combined approach yields better results for customers' interactional justice perceptions and satisfaction with the recovery, but not for their distributive or procedural justice perceptions or consequent commitment to the firm. This study can serve as a springboard for further research. For example, perceptions and attitudes regarding employee empowerment vary with national culture. Furthermore, a number of factors likely moderate the cause-and-effect relationship between a failure recovery and customers' post-recovery perceptions [e.g., the severity of the failure].

Keywords: apology, empowerment, service failure recovery, service recovery

Procedia PDF Downloads 296
3022 Knowledge Management for Competitiveness and Performances in Higher Educational Institutes

Authors: Jeyarajan Sivapathasundram

Abstract:

Knowledge management has been recognised as an emerging factor in competitiveness and firm performance. As knowledge-rich institutions, higher educational institutes should therefore treat knowledge management as a resource for achieving competitive advantage. The present research draws on the results of a postgraduate study of knowledge management at non-state higher educational institutes in Sri Lanka, and aims to identify, from those results, how knowledge management relates to competition and firm performance in higher educational institutes. The results take the form of pairs, each consisting of a knowledge management practice and the reason behind its existence. The present research developed a filter, using attributes that characterise competition and performance, to select the pairs that satisfy its conditions of competition and firm performance; for example, benchmarking is practised in order to compete ethically in the conduct of courses. Since the postgraduate research tested the results of foreign studies within a qualitative paradigm, the findings of the present research are generalised facts about knowledge management for competitiveness and performance in higher educational institutes, and they are presented in this publication as such.

Keywords: competition in and among higher educational institutes, performances of higher educational institutes, noun based filtering, production out of generalisation of a research

Procedia PDF Downloads 137
3021 Cultural Aspect Representation: An Analysis of EFL Textbook Grade 10 Years 2017 in Indonesia

Authors: Soni Ariawan

Abstract:

The relation between language and culture is an interesting research issue. The debate is not about which comes first, language or culture; rather, it is strongly argued that learning a foreign language also means learning the culture of that language. A more interesting issue arises when constructing an EFL textbook: the proportional representation of source culture, target culture and international culture. This study investigates the representation of cultural content in the Indonesian EFL textbook for grade 10, year 2017. Cortazzi and Jin’s theoretical framework is employed to analyse the reading texts, conversations, and images. The findings show that national character, the main agenda of the Indonesian government, is reflected in this textbook, which highlights the source culture (Indonesian culture) more frequently than the target and international cultures. This aligns with the government’s aim to strengthen national identity and promote local culture awareness through education. The study is expected to help the government consider balanced cultural representation when constructing textbooks. Furthermore, teachers and students should be aware of the cultural content of the EFL textbook and be able to enhance intercultural communication not only in the classroom but also in wider society.

Keywords: EFL textbook, intercultural communication, local culture, target culture, international culture

Procedia PDF Downloads 220
3020 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To address this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. Classification is improved by Singular Value Decomposition, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thereby reducing unwanted information such as noise. Singular Value Decomposition makes it possible to define a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded; this way, only the valuable information about each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in the transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are used in the identification process. In F2 the angle is weighted, since the top vectors set the importance of each contribution to the formation of a target signal, whereas F1 simply uses the unweighted angle.
In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated rather than measured profiles is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted contributions, while simulated profiles do not. Since the test and training samples then have a similar nature and usually a similarly high signal-to-noise ratio, noise is added before the creation of the test set in order to assess the feasibility of the approach. The identification results for the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles from electromagnetic simulations are conducted, revealing promising results. Given the dissimilarities between the test and training sets when noise is added, recognition performance improves when weighting is applied. Future experiments with larger sets are planned, with the aim of finally using actual profiles as test sets in a real hostile situation.

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 354
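As a rough illustration of the subspace approach described in the abstract above, the sketch below models each target as the span of its top singular vectors and classifies a test profile by the smallest subspace angle (an unweighted, F1-style metric). The toy data, function names, and the choice of a 3-dimensional signal subspace are illustrative assumptions, not the authors' database or exact algorithm:

```python
import numpy as np

def signal_subspace(profiles, k):
    # Columns of `profiles` are range profiles of one target;
    # the top-k left singular vectors span the signal subspace.
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def subspace_angle(basis, x):
    # Angle between a test profile and the subspace spanned by `basis`
    # (orthonormal columns), via the norm of the orthogonal projection.
    x = x / np.linalg.norm(x)
    cos_theta = np.clip(np.linalg.norm(basis.T @ x), 0.0, 1.0)
    return np.arccos(cos_theta)

def identify(test_profile, subspaces):
    # Pick the target whose signal subspace minimizes the angle.
    return min(subspaces, key=lambda t: subspace_angle(subspaces[t], test_profile))

# Toy example: two "targets" with energy concentrated in different range bins.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 20)); a[0] += 5.0   # target A: strong return in bin 0
b = rng.normal(size=(64, 20)); b[1] += 5.0   # target B: strong return in bin 1
bases = {"A": signal_subspace(a, 3), "B": signal_subspace(b, 3)}
test = np.zeros(64); test[0] = 1.0           # a noiseless profile resembling A
print(identify(test, bases))
```

A weighted, F2-style variant would scale the projection coefficients by the corresponding singular values before computing the angle, so that the dominant vectors contribute more.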
3019 Managerial Advice-Seeking and Supply Chain Resilience: A Social Capital Perspective

Authors: Ethan Nikookar, Yalda Boroushaki, Larissa Statsenko, Jorge Ochoa Paniagua

Abstract:

Given the serious impact that supply chain disruptions can have on a firm's bottom-line performance, both industry and academia are interested in supply chain resilience, the capability of a supply chain to cope with disruptions. To date, much of the research has focused on the antecedents of supply chain resilience, suggesting various firm-level capabilities that are associated with greater resilience. A consensus has emerged among researchers that supply chain flexibility holds the greatest potential to create resilience: flexibility creates readiness to respond to disruptions with little cost and time by reconfiguring supply chain resources to mitigate the impacts of the disruption. Decisions related to supply chain disruptions are made by supply chain managers, yet the role played by supply chain managers' reference networks has been overlooked in the supply chain resilience literature. This study aims to understand the impact of supply chain managers on their firms' supply chain resilience. Drawing on social capital theory and social network theory, this paper proposes a conceptual model to explore the role of supply chain managers in developing the resilience of supply chains. Our model posits that a higher level of a supply chain manager's embeddedness in their reference network, that is, the set of individuals from whom the manager seeks advice on supply chain matters, is associated with increased resilience of the firm's supply chain, and that this relationship is mediated by supply chain flexibility.

Keywords: supply chain resilience, embeddedness, reference networks, social capital

Procedia PDF Downloads 228
3018 Corporate Voluntary Greenhouse Gas Emission Reporting in United Kingdom: Insights from Institutional and Upper Echelons Theories

Authors: Lyton Chithambo

Abstract:

This paper reports the results of an investigation into the extent to which various stakeholder pressures influence voluntary disclosure of greenhouse-gas (GHG) emissions in the United Kingdom (UK). The study, which is grounded in institutional theory, also borrows insights from upper echelons theory and examines whether specific managerial (chief executive officer) characteristics explain, and moderate the effect of, various stakeholder pressures on GHG voluntary disclosure. Data were obtained from the 2011 annual and sustainability reports of a sample of 216 UK companies on the FTSE350 index of the London Stock Exchange. Generally, the results suggest no substantial shareholder or employee pressure on a firm to disclose GHG information, but significant positive pressure associated with a firm's market status, with firms holding more market share disclosing more GHG information. Consistent with the predictions of institutional theory, we found evidence that coercive pressure (i.e., regulatory pressure) and mimetic pressures arising in some industries, notably industrials and consumer services, have a significant positive influence on firms' GHG disclosure decisions. Creditor pressure also had a significant negative relationship with GHG disclosure. While CEO age had a direct negative effect on GHG voluntary disclosure, its moderating effect on the influence of stakeholder pressure was significant only for regulatory pressure. The results have important implications for both policy makers and company boards strategizing to rein in their GHG emissions.

Keywords: greenhouse gases, voluntary disclosure, upper echelons theory, institutional theory

Procedia PDF Downloads 233
3017 Angle of Arrival Estimation Using Maximum Likelihood Method

Authors: Olomon Wu, Hung Lu, Nick Wilkins, Daniel Kerr, Zekeriya Aliyazicioglu, H. K. Hwang

Abstract:

Multiple Input Multiple Output (MIMO) radar has received increasing attention in recent years. MIMO radar has many advantages over conventional phased-array radar, such as improved target detection, resolution enhancement, and interference suppression. In this paper, results are presented from a simulation study of MIMO Uniformly-Spaced Linear Array (ULA) antennas. The performance is investigated under varied parameters, including array size, Pseudo-Random (PN) sequence length, number of snapshots, and Signal-to-Noise Ratio (SNR). The results for MIMO are compared to those of a traditional array antenna.

Keywords: MIMO radar, phased array antenna, target detection, radar signal processing

Procedia PDF Downloads 542
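For a single source in white Gaussian noise, the maximum-likelihood angle-of-arrival estimate for a uniformly spaced linear array reduces to a grid search for the steering vector that captures the most snapshot energy. The sketch below illustrates this under assumed toy parameters (8 elements, half-wavelength spacing, a 0.5-degree grid); it is not the paper's simulation setup:

```python
import numpy as np

def steering_vector(theta_deg, n_elems, spacing=0.5):
    # ULA steering vector; `spacing` is element spacing in wavelengths.
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def ml_aoa(snapshots, n_elems, grid=None):
    # Single-source ML estimate: maximize a(theta)^H R a(theta) over the grid,
    # where R is the sample covariance of the array snapshots.
    if grid is None:
        grid = np.arange(-90.0, 90.5, 0.5)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    power = [np.real(steering_vector(t, n_elems).conj() @ R @ steering_vector(t, n_elems))
             for t in grid]
    return float(grid[int(np.argmax(power))])

# Toy run: source at 20 degrees, 64 snapshots, moderate noise.
rng = np.random.default_rng(1)
n_elems, n_snap, true_theta = 8, 64, 20.0
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)       # source waveform
x = np.outer(steering_vector(true_theta, n_elems), s)
x += 0.3 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
print(ml_aoa(x, n_elems))   # close to 20.0
```

The grid resolution bounds the achievable accuracy; a finer grid or local interpolation around the peak would sharpen the estimate.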
3016 A Research Analysis on the Source Technology and Convergence Types

Authors: Kwounghee Choi

Abstract:

Technological convergence between sectors is expected to have a very large impact on future industry and the economy. This study takes an empirical approach to classifying convergence between specific technologies. For such a classification, the target technologies to be analysed must first be set; this study selected them from the national research and development plan. A source technology was then identified for each analysis, and depending on the weight of the source technology, convergence types were classified as NT-based, BT-based, IT-based, ET-based or CS-based. The study aims to demonstrate empirically the concept of convergence technology and its types. Using the source technology to classify convergence types should prove useful for making practical convergence-technology strategies.

Keywords: technology convergence, source technology, convergence type, R&D strategy, technology classification

Procedia PDF Downloads 485
3015 In-Vivo Association of Multivalent 11 Zinc Fingers Transcriptional Factors CTCF and Boris to YB-1 in Multiforme Glioma-RGBM Cell Line

Authors: Daruliza Kernain, Shaharum Shamsuddin, See Too Wei Cun

Abstract:

CTCF is a unique, highly conserved and ubiquitously expressed 11-zinc-finger (ZF) transcription factor with multiple target sites. It can bind various target sequences to perform different regulatory roles, including promoter activation or repression, creation of hormone-responsive gene-silencing elements, and functional blocking of enhancer-promoter interactions. CTCF binds its essential binding sites through combinations of different ZF domains. BORIS (brother of the regulator of imprinted sites), which is expressed only in the testis and in certain cancer cell lines, is homologous to CTCF across the 11 ZF domains. Since both transcription factors share the same ZF domains, they may bind the same target sequences. In this study, the interaction of these two proteins with the multifunctional Y-box DNA/RNA-binding factor YB-1 was determined. Protein-protein interactions between CTCF/YB-1 and BORIS/YB-1 were detected by co-immunoprecipitation (Co-IP) in reciprocal experiments on RGBM total cell lysate. The results showed that both CTCF and BORIS interact with YB-1 in the glioma RGBM cell line. To the best of our knowledge, this is the first finding demonstrating that BORIS and YB-1 can form a complex in vivo.

Keywords: immunoprecipitation, CTCF/BORIS/YB-1, transcription factor, molecular medicine

Procedia PDF Downloads 266
3014 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)

Authors: Davood Rajabi, Mojgan Yazdani

Abstract:

The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency depends on three applied parameters. This study therefore assessed the efficiency of the Imperialist Competition Algorithm (ICA) in estimating the optimum parameters of the non-linear Muskingum model. In addition to ICA, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were used to provide benchmarks against which to judge ICA. ICA was first applied to the Wilson flood routing problem; then the routing of two flood events of the DoAab Samsami River was investigated. For the Wilson flood, the objective function was the sum of squared deviations (SSQ) between observed and calculated discharges; for the two other floods, the sum of absolute deviations (SAD) between observed and calculated discharges was also considered alongside SSQ. For the first floodwater, GA performed best on SSQ, while ICA ranked first on SAD. For the second floodwater, ICA performed better under both objective functions. According to these results, ICA can be regarded as an appropriate method for estimating the parameters of the non-linear Muskingum model.

Keywords: Doab Samsami river, genetic algorithm, imperialist competition algorithm, meta-exploratory algorithms, particle swarm optimization, Wilson flood

Procedia PDF Downloads 505
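The three parameters referred to in the abstract above are K, x and m in the non-linear storage relation S = K[xI + (1 - x)O]^m. A minimal sketch of the routing scheme and the two objective functions (SSQ and SAD) follows; the Euler storage update, the toy hydrograph, and the crude random search standing in for ICA/GA/PSO are all illustrative assumptions:

```python
import random

def muskingum_route(inflow, K, x, m, dt=1.0):
    # Non-linear Muskingum: S = K * (x*I + (1-x)*O)**m.
    # Outflow is recovered from storage, then storage is updated
    # with dS/dt = I - O (simple Euler scheme).
    out = [inflow[0]]                                  # assume O0 = I0
    S = K * (x * inflow[0] + (1 - x) * out[0]) ** m
    for I in inflow[1:]:
        O = max(((S / K) ** (1.0 / m) - x * I) / (1 - x), 0.0)
        S = max(S + dt * (I - O), 1e-9)                # keep storage positive
        out.append(O)
    return out

def ssq(obs, calc):   # sum of squared deviations
    return sum((o - c) ** 2 for o, c in zip(obs, calc))

def sad(obs, calc):   # sum of absolute deviations
    return sum(abs(o - c) for o, c in zip(obs, calc))

# Toy calibration: recover parameters from synthetic "observed" outflow
# using a crude random search in place of ICA/GA/PSO.
inflow = [22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32, 28, 24, 22]
observed = muskingum_route(inflow, K=0.9, x=0.25, m=1.2)
random.seed(0)
best = min(
    ((random.uniform(0.1, 2), random.uniform(0.01, 0.45), random.uniform(1, 2))
     for _ in range(5000)),
    key=lambda p: ssq(observed, muskingum_route(inflow, *p)),
)
print(best, ssq(observed, muskingum_route(inflow, *best)))
```

Swapping `ssq` for `sad` in the search key reproduces the second objective function; population-based optimizers such as ICA, GA or PSO explore the same parameter box far more efficiently than blind sampling.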
3013 A Study of Surface of Titanium Targets for Neutron Generators

Authors: Alexey Yu. Postnikov, Nikolay T. Kazakovskiy, Valery V. Mokrushin, Irina A. Tsareva, Andrey A. Potekhin, Valentina N. Golubeva, Yuliya V. Potekhina, Maxim V. Tsarev

Abstract:

The development of tritium and deuterium targets for neutron tubes and generators is part of the activities of the All-Russia Research Institute of Experimental Physics (RFNC-VNIIEF). These items consist of a metal substrate (for example, copper) with a titanium film a few microns thick deposited on it; the films are then saturated with tritium, deuterium or their mixtures. A significant problem for neutron tubes and generators is the characterization of the substrate surface before the titanium film is deposited, and the analysis of the film surface before and after its saturation with hydrogen isotopes. The effectiveness of a neutron tube or generator also depends on the quality of the surface of the initial substrate, the deposited metal film and the hydrogenated target. The objective of our work is to study target prototype samples that differ in the preliminary chemical processing of the copper substrate, and to analyse the integrity of the titanium film after its saturation with deuterium. The results of studying the copper substrate and the surface of the deposited titanium film by electron microscopy, X-ray spectral microanalysis and laser-spark methods are presented. The causes of surface defects have been identified, and the distribution of deuterium and some impurities (oxygen and nitrogen) along the surface and across the height of the hydrogenated film has been established. This allows us to evaluate the compositional homogeneity of the samples and, consequently, the quality of the hydrogenated samples. As a result of this work, proposals for advancing the production technology and the characterization of the target surface are presented.

Keywords: tritium and deuterium targets, titanium film, laser-spark methods, electron microscopy

Procedia PDF Downloads 442
3012 Governance and Financial Constraints the Impact on Corporate Social Responsibility Implementation in Cooperatives

Authors: Wanlapha Phraibueng, Patrick Sentis, Geraldine Riviere-Giordano

Abstract:

Corporate Social Responsibility (CSR) initiatives have been widely discussed, especially for investor-oriented firms. Cooperatives, in contrast, pay less attention to CSR because social, economic and environmental responsibility is already integrated into their activities. Moreover, viewed through ownership theory and agency theory, cooperatives tend to forgo CSR investment because decision control in their governance is unclear and their ability to acquire capital is limited. The distinctive governance and financial structures of cooperatives lead to conflict among stakeholders over long-term investment, which affects firm financial performance. As an illustration of this dilemma, we address the question of whether cooperatives' governance and financial structures constrain the implementation of CSR policies. We find that in large cooperatives, governance and financial structures predispose the firm to invest in CSR, whereas in startup or small cooperatives they act as constraints on implementing CSR policies. We propose an alternative financial structure based on the trade-off between debt and equity, which aims to relax the restrictions in cooperatives' governance and to allow cooperatives to raise capital from members and non-members alike. We suggest that using equity in the financial structure induces cooperatives to invest in CSR policies. The alternative financial structure eliminates not only the cooperative ownership-control problem but also the constraints on capital acquisition. By implementing CSR activities consistent with this financial choice, cooperatives can increase firm value and reduce conflict among their stakeholders.

Keywords: cooperatives, corporate social responsibility, financial, governance

Procedia PDF Downloads 139
3011 Comparison Study of Capital Protection Risk Management Strategies: Constant Proportion Portfolio Insurance versus Volatility Target Based Investment Strategy with a Guarantee

Authors: Olga Biedova, Victoria Steblovskaya, Kai Wallbaum

Abstract:

In the current capital market environment, investors constantly face the challenge of finding a successful and stable investment mechanism. Highly volatile equity markets and extremely low bond returns create demand for sophisticated yet reliable risk management strategies that efficiently protect investments. This study compares a classic Constant Proportion Portfolio Insurance (CPPI) strategy to a Volatility Target portfolio insurance (VTPI). VTPI is an extension of the well-known Option Based Portfolio Insurance (OBPI) to the case where the embedded option is linked not to a pure risky asset, such as the S&P 500, but to a Volatility Target (VolTarget) portfolio. The VolTarget strategy is a recently emerged rule-based dynamic asset allocation mechanism that keeps the portfolio’s volatility under control. As a result, a typical VTPI strategy allows higher participation rates in the market due to reduced embedded option prices. In addition, controlled volatility levels eliminate the volatility spread in option pricing, one of the frequently cited reasons for the OBPI strategy falling behind CPPI. The strategies are compared within the framework of stochastic dominance theory, based on numerical simulations rather than on the restrictive assumption of Black-Scholes type dynamics of the underlying asset. An extended comparative quantitative analysis of the performance of these investment strategies in various market scenarios and across a range of input parameter values is presented.

Keywords: CPPI, portfolio insurance, stochastic dominance, volatility target

Procedia PDF Downloads 167
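The classic CPPI leg of the comparison above follows a simple rule: at each rebalancing date, allocate a multiple of the cushion (wealth above a guaranteed floor) to the risky asset and the remainder to cash. A minimal sketch under assumed toy parameters (multiplier 4, a 90% floor, a simulated daily return path) follows; the VolTarget/VTPI overlay is not modelled here:

```python
import random

def cppi_path(risky_returns, multiplier=4.0, floor_frac=0.9, rf=0.0001, v0=100.0):
    # Classic CPPI: risky exposure = multiplier * cushion, capped at total
    # wealth and floored at zero; the floor accrues at the risk-free rate.
    v, floor = v0, floor_frac * v0
    path = [v]
    for r in risky_returns:
        cushion = max(v - floor, 0.0)
        risky = min(multiplier * cushion, v)
        safe = v - risky
        v = risky * (1 + r) + safe * (1 + rf)
        floor *= 1 + rf
        path.append(v)
    return path

# Toy simulation: one year of daily returns with a five-day crash mid-path.
random.seed(42)
returns = [random.gauss(0.0003, 0.01) for _ in range(250)]
returns[120:125] = [-0.05] * 5
path = cppi_path(returns)
print(round(path[-1], 2), round(min(path), 2))  # terminal value, worst value
```

With a multiplier of 4, the floor holds under discrete rebalancing as long as no single-period risky loss exceeds 1/4 = 25%; losses beyond that threshold (gap risk) are one reason such strategies are compared under simulated dynamics rather than idealized Black-Scholes assumptions.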