Search results for: theoretical calculations
3667 Variation in Complement Order in English: Implications for Interlanguage Syntax
Authors: Juliet Udoudom
Abstract:
Complement ordering principles of natural language phrases (XPs) stipulate that head terms be consistently placed phrase-initially or phrase-finally, yielding two basic theoretical orders: Head–Complement order or Complement–Head order. This paper examines the principles which determine complement ordering in English V- and N-bar structures. The aim is to determine the extent to which complement linearisations in the two phrase types are consistent with the two theoretical orders outlined above, given the flexible and varied nature of natural language structures. The objective is to see whether there are variations in the complement linearisations of the XPs studied and the implications which such variations hold for the interlanguage syntax of English and Ibibio. A corpus-based approach was employed in obtaining the English data. V- and N-bar structures containing complement structures were isolated for analysis. Data were examined from the perspective of the X-bar and Government theories of Chomsky's (1981) Government-Binding framework. Findings from the analysis show that in V-bar structures in English, heads are consistently placed phrase-initially, yielding a Head–Complement order; however, complement linearisation in the N-bar structures studied exhibited parametric variations. Thus, in some N-bar structures in English the nominal head is ordered to the left, whereas in others the head term occurs to the right. It may therefore be concluded that the principles which determine complement ordering are both language-particular and phrase-specific, following insights provided within phrasal syntax.
Keywords: complement order, complement–head order, head–complement order, language–particular principles
Procedia PDF Downloads 346
3666 MHD Equilibrium Study in Alborz Tokamak
Authors: Maryamosadat Ghasemi, Reza Amrollahi
Abstract:
Plasma equilibrium geometry has a great influence on confinement and magnetohydrodynamic stability in tokamaks. The poloidal field (PF) system of a tokamak should be able to support this plasma equilibrium geometry. In this work, a numerical code based on radial basis functions is presented and used to solve the Grad–Shafranov (GS) equation for the axisymmetric equilibrium of tokamak plasma. Radial basis functions (RBFs), a kind of numerical meshfree method (MFM) for solving partial differential equations (PDEs), appeared in the last decade and have been developing significantly in the last few years. This technique is applied in this study to obtain the equilibrium configuration for the Alborz Tokamak. The convergence behavior of the numerical solution validates these calculations.
Keywords: equilibrium, Grad–Shafranov, radial basis functions, Alborz Tokamak
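The abstract does not include the solver itself, but the RBF collocation idea it describes can be sketched on a much simpler model problem. The following is a minimal illustration only (not the authors' Grad–Shafranov code): Gaussian RBF collocation for a 1D Poisson problem with a known exact solution. The node count and shape parameter are arbitrary demo choices.

```python
import numpy as np

# Collocation nodes (also used as RBF centres) on [0, 1]
n = 25
x = np.linspace(0.0, 1.0, n)
eps = 5.0  # shape parameter (chosen by trial; affects conditioning)

def phi(x, c):
    """Gaussian RBF centred at c."""
    return np.exp(-(eps * (x - c)) ** 2)

def phi_xx(x, c):
    """Second derivative of the Gaussian RBF with respect to x."""
    d = x - c
    return (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d) ** 2)

# Assemble the collocation system: PDE rows at interior nodes,
# Dirichlet rows (u = 0) at the two boundary nodes.
# Model problem: u'' = -pi^2 sin(pi x), exact solution u = sin(pi x).
A = np.empty((n, n))
b = np.empty(n)
for i, xi in enumerate(x):
    if i == 0 or i == n - 1:
        A[i, :] = phi(xi, x)      # boundary condition row
        b[i] = 0.0
    else:
        A[i, :] = phi_xx(xi, x)   # PDE row
        b[i] = -np.pi**2 * np.sin(np.pi * xi)

coef = np.linalg.solve(A, b)

# Evaluate the RBF expansion on a fine grid and compare with sin(pi x)
xe = np.linspace(0.0, 1.0, 200)
u = np.array([phi(s, x) @ coef for s in xe])
err = np.max(np.abs(u - np.sin(np.pi * xe)))
print(f"max error: {err:.2e}")
```

The same assembly pattern (PDE rows at interior collocation points, boundary-condition rows at boundary points) carries over to the 2D Grad–Shafranov operator, which is what makes the method meshfree: no grid connectivity is needed, only scattered nodes.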
Procedia PDF Downloads 473
3665 Creativity and Intelligence: Psychoeducational Connections
Authors: Cristina Costa-Lobo, Carla B. Vestena, Filomena E. Ponte
Abstract:
Creativity and intelligence are concepts that have aroused very expressive interest in the educational and psychological sciences since the middle of the last century, given their great impact on the potential and well-being of individuals. Moreover, owing to progress in cognitive and positive psychology, the last decade has seen a growing interest in the psychoeducational domain of intelligence and creativity. In this theoretical work, the theoretical models that relate intelligence and creativity are analyzed comparatively, several psychoeducational intervention programs implemented to promote creativity are examined, and possibilities, realities, and ironies around the psychological evaluation of intelligence and creativity are signaled. In order to reach a broad perspective on creativity, evidence is presented that points to the need to evaluate different psychological domains. The psychoeducational intervention programs addressed have, as a common characteristic, the full stimulation of the creative potential of the participants, assumed as a highly valued capacity at the present time. The results systematize two guiding principles shared by all interventions in the ambit of creativity: all individuals can be creative, and creativity is a capacity that can be stimulated. This work points to the importance of stimulating creativity in educational contexts, and to the usefulness and pertinence of creating, implementing, and monitoring flexible curricula, adapted to the educational needs of students, promoting collaborative work among teachers, parents, students, psychologists, managers, and educational administrators.
Keywords: creativity, intelligence, psychoeducational intervention programs, psychological evaluation, educational contexts
Procedia PDF Downloads 403
3664 Design an Expert System to Assess the Hydraulic System in Thermal and Hydrodynamic Aspect
Authors: Ahmad Abdul-Razzak Aboudi Al-Issa
Abstract:
Thermal and hydrodynamic behavior are basic aspects of any hydraulic system and must therefore be assessed before constructing the system. This assessment needs good expertise in these aspects to obtain an efficient hydraulic system. Therefore, this study aims to build an expert system called Hydraulic System Calculations (HSC) to ensure smooth operation of the hydraulic system. The expert system (HSC) was designed and coded as a user-friendly interactive program in Microsoft Visual Basic 2010. The suggested code provides the designer with a number of choices to resolve the problem of hydraulic oil overheating which may arise during continuous operation of the hydraulic unit. As a result, the HSC can minimize human errors, effort, time, and cost in hydraulic machine design.
Keywords: fluid power, hydraulic system, thermal and hydrodynamic, expert system
Procedia PDF Downloads 443
3663 Efficient DCT Architectures
Authors: Mr. P. Suryaprasad, R. Lalitha
Abstract:
This paper presents area- and delay-efficient architectures for the implementation of the one-dimensional and two-dimensional discrete cosine transform (DCT). They support different lengths (4, 8, 16, and 32). DCT blocks are used in different video coding standards for image compression. The 2D-DCT calculation is made using the separability property of the 2D-DCT, such that the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. Based on the existing 1D-DCT architecture, two different types of 2D-DCT architectures, folded and parallel, are implemented. Both structures use the same transpose buffer. The proposed transpose buffer occupies less area and achieves higher speed than the existing one. Hence the area, power, and delay of both 2D-DCT architectures are reduced.
Keywords: transposition buffer, video compression, discrete cosine transform, high efficiency video coding, two dimensional picture
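The separability property relied on above is easy to demonstrate numerically: a 2D DCT equals a 1D DCT over the rows, a transpose (the role of the hardware transpose buffer), and a second 1D DCT. A small sketch using an orthonormal DCT-II matrix (illustrative only, not the paper's hardware design):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k samples cos(pi*(2j+1)*k/(2n))."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    D = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    D[0, :] *= np.sqrt(1.0 / n)   # DC row scaling
    D[1:, :] *= np.sqrt(2.0 / n)  # AC row scaling
    return D

n = 8
D = dct_matrix(n)
X = np.random.default_rng(0).standard_normal((n, n))

# Direct 2-D DCT: transform columns and rows in one expression
direct = D @ X @ D.T

# Separable datapath: 1-D DCT of each row, transpose buffer, second 1-D stage
rows = X @ D.T        # 1-D DCT applied to every row
buffered = rows.T     # the transpose buffer between the two 1-D stages
cols = buffered @ D.T # 1-D DCT of each (former) column
separable = cols.T    # transpose back to the natural orientation

print(np.allclose(direct, separable))  # → True
```

The transpose buffer is therefore the only coupling between the two 1D stages, which is why the folded and parallel variants in the paper can share it.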
Procedia PDF Downloads 519
3662 A Quantitative Plan for Drawing Down Emissions to Attenuate Climate Change
Authors: Terry Lucas
Abstract:
Calculations are performed to quantify the potential contribution of each greenhouse gas emission reduction strategy. This approach facilitates the visualisation of the relative benefits of each, and it provides a potential baseline for the development of a plan of action that is rooted in quantitative evaluation. Emissions reductions are converted to potential de-escalation of global average temperature. A comprehensive plan is then presented which shows the potential benefits all the way out to year 2100. A target temperature de-escalation of 2 °C was selected, but the plan shows a benefit of only 1.225 °C. This latter disappointing result is in spite of new and powerful technologies introduced into the equation, including nuclear fusion and alternative nuclear fission processes. Current technologies such as wind, solar, and electric vehicles show surprisingly small contributions to the whole.
Keywords: climate change, emissions, drawdown, energy
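The abstract does not state how emissions reductions are converted to temperature de-escalation. One common back-of-envelope approach, shown here purely for illustration, uses the transient climate response to cumulative emissions (TCRE); the coefficient below is an assumed mid-range value (the IPCC likely range is roughly 0.27 to 0.63 °C per 1000 GtCO2) and is not a figure taken from the paper.

```python
# Assumed mid-range TCRE: degrees C of warming per GtCO2 emitted cumulatively
TCRE = 0.45e-3

def avoided_warming(avoided_gt_co2):
    """Approximate temperature de-escalation for a given cumulative
    emissions drawdown, under the (assumed) linear TCRE relation."""
    return TCRE * avoided_gt_co2

# e.g. a strategy that avoids 500 GtCO2 cumulatively by 2100
print(f"{avoided_warming(500):.3f} C")  # → 0.225 C
```

Under this linear relation, per-strategy drawdowns can simply be summed and converted, which matches the additive, strategy-by-strategy structure the abstract describes.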
Procedia PDF Downloads 129
3661 Influence of Concrete Cracking in the Tensile Strength of Cast-in Headed Anchors
Authors: W. Nataniel, B. Lima, J. Manoel, M. P. Filho, H. Marcos, Oliveira Mauricio, P. Ferreira
Abstract:
Headed reinforcement bars are increasingly used for anchorage in concrete structures. Applications include connections in composite steel-concrete structures, such as beam-column joints, in several strengthening situations, as well as in more traditional uses in cast-in-place and precast structural systems. This paper investigates the reduction in the ultimate tensile capacity of embedded cast-in headed anchors due to concrete cracking. A series of nine laboratory tests is carried out to evaluate the influence of cracking on the concrete breakout strength in tension. The experimental results show that cracking affects both the resistance and the load-slip response of the headed bar anchors. The strengths measured in these tests are compared to theoretical resistances calculated following the recommendations presented by fib Bulletin no. 58 (2011), ETAG 001 (2010), and ACI 318 (2014). The influences of parameters such as the effective embedment depth (hef), bar diameter (ds), and concrete compressive strength (fc) are analysed and discussed. The theoretical recommendations are shown to be over-conservative for both embedment depths and were, in general, inaccurate in comparison to the experimental trends. ACI 318 (2014) was the design code which presented the best performance regarding the prediction of the ultimate load, with an average of 1.42 for the ratio between the experimental and estimated strengths, a standard deviation of 0.36, and a coefficient of variation equal to 0.25.
Keywords: cast-in headed anchors, concrete cone failure, uncracked concrete, cracked concrete
Procedia PDF Downloads 203
3660 Airbnb, Hotel Industry and Optimum Strategies: Evidence from European Cities, Barcelona, London and Paris
Authors: Juan Pedro Aznar Alarcon, Josep Maria Sayeras Maspera
Abstract:
Airbnb and other similar platforms offer a near substitute to the traditional accommodation service supplied by the hotel sector. In this context, hotels can try to compete by offering higher quality and additional services, which imply the need for new investments, or try to compete by reducing prices. The theoretical model presented in this paper analyzes the best response using a sequential game theory model. The main conclusion is that, due to the financial constraints that small and medium hotels face, these hotels have reduced prices, whereas hotels that belong to international groups or have easy access to financial resources have increased their investment to raise the quality of the service provided. To check the validity of the theoretical model, financial data from Barcelona, London, and Paris hotels have been used, analyzing profitability, quality of the service provided, investment propensity, and the evolution of gross profit. The model and the empirical data provide the basis for industrial policy in the hospitality industry. Addressing the extra cost that small hotels in Europe face compared with bigger firms would help to improve the level of quality provided and, to some extent, have positive externalities in terms of job creation and increasing added value for the industry.
Keywords: Airbnb, profitability, hospitality industry, game theory
Procedia PDF Downloads 346
3659 Eco-Innovation: Perspectives from a Theoretical Approach and Policy Analysis
Authors: Natasha Hazarika, Xiaoling Zhang
Abstract:
Eco-innovations, unlike regular innovations, are not self-enforcing and are associated with the double externality problem. Therefore, it is emphasized that eco-innovations need government intervention in the form of supportive policies on priority. Of late, factors like consumer demand, technological advancement, and the competitiveness of firms have been considered equally important. However, the interaction among these driving forces has not been fully traced out. Also, the theory on eco-innovation is found to be at a nascent stage which does not resonate with its dynamics, as it is traditionally studied under neo-classical economics theory. Therefore, to begin with, insights for this research have been derived from the merits of neo-classical economics, the evolutionary approach, and the resource-based view, which revealed the issues pertaining to technological system lock-ins and firm-based capacities that usually remain undefined by the neo-classical approach. This is followed by determining how policies (at the national level) and their instruments are designed to motivate firms to eco-innovate, by analyzing the innovation 'friendliness' of the policy style and the policy instruments as per the indicators provided in the innovation literature, by means of document review (content analysis) of the relevant policies introduced by the Chinese government. The significance of the theoretical analysis lies in its ability to show why certain practices become dominant irrespective of gains or losses, and that of the policy analysis lies in its ability to demonstrate the credibility of the government's sticks, carrots, and sermons for eco-innovation.
Keywords: firm competency, eco-innovation, policy, theory
Procedia PDF Downloads 180
3658 Zooming into the Leadership Behaviours Desired by the 21st Century Workforce: Introduction of the Research Theory and Methods
Authors: Anita Bela, Marta Juhasz
Abstract:
Adapting to the always-changing environment comes with complex determinants. The authors zoom into one aspect only: the current workforce presents obstacles by being less keen to stay engaged, even short or mid-term, resulting in additional challenges impacting business performance. Seeing these occur in practice made the researchers eager to gain a better understanding of the reasons behind them. The paper aims to provide an overview of the theoretical background and the research methods planned for the different stages of the research. The theoretical part takes leadership behaviours under the lens, while the focus is on finding ways to attract and retain those who prefer working under more flexible employment conditions (e.g. contractor, contingent worker, etc.). These are considered organizational values and, along with the power of people management, have their engaging relevance. The organizational culture (visible or invisible level) is clearly the mirror of the set of shared values guiding all members of the company towards acceptable behaviour. The applied research method, inductive reasoning, was selected since the focus and questions raised in this research result from specific observations made of the employees (various employment types) and leaders of start-ups and corporates. By comparing the similarities and differences, the researchers hope to prove the readiness and agility of start-up culture for the desired leadership behaviours of the current and future workforce against corporate culture. While exploring the preferences and engaging factors of the 21st-century workforce, data gathering will happen through website analysis, using ATLAS.ti qualitative software, followed by interview sessions where demographics will be collected and preferred leadership behaviours identified using the Critical Incident Technique.
Moreover, a short engagement survey will be administered to understand the linkage between organizational culture type and engagement level. To conclude, after gaining theoretical understanding, we will zoom back to the employees to reveal the behaviours to be followed to achieve engagement in an environment where nothing is stable and where companies must always keep their agile eyes and reactions vivid.
Keywords: leadership behaviours, organizational culture, qualitative analysis, workforce engagement
Procedia PDF Downloads 114
3657 A Theoretical Framework for Conceptualizing Integration of Environmental Sustainability into Supplier Selection
Authors: Tonny Ograh, Joshua Ayarkwa, Dickson Osei-Asibey, Alex Acheampong, Peter Amoah
Abstract:
Theories are used to improve the conceptualization of research ideas. These theories provide valuable elucidations that help us grasp the meaning of research findings. Nevertheless, the use of theories to promote studies in green supplier selection in procurement decisions has attracted little attention. With the emergence of sustainable procurement, public procurement practitioners in Ghana are yet to achieve relevant knowledge on green supplier selection due to insufficient knowledge and inadequate appropriate frameworks. The glaring consequences of public procurers' failure to integrate environmental considerations into supplier selection explain the adoption of a multi-theory approach for comprehending the dynamics of green integration into supplier selection. In this paper, the practicality of three theories for improving the understanding of the influential factors enhancing the integration of environmental sustainability into supplier selection is reviewed. The three theories are Resource-Based Theory, Human Capital Theory, and Absorptive Capacity Theory. This review uncovered knowledge management, top management commitment, and environmental management capabilities as important elements needed for the integration of environmental sustainability into supplier selection in public procurement. The theoretical review yielded a framework that conceptualizes the knowledge and capabilities of practitioners relevant to the incorporation of environmental sustainability into supplier selection in public procurement.
Keywords: environmental, sustainability, supplier selection, environmental procurement, sustainable procurement
Procedia PDF Downloads 178
3656 The Virtues and Vices of Leader Empathy: A Review of a Misunderstood Construct
Authors: John G. Vongas, Raghid Al Hajj
Abstract:
In recent years, there has been a surge in research on empathy across disciplines ranging from management and psychology to philosophy and neuroscience. In organizational behavior, in particular, scholars have become interested in leader empathy given the rise of workplace diversity and the growing perception of leaders as managers of group emotions. It would appear that the current zeitgeist in behavioral and philosophical science is that empathy is a cornerstone of morality and that our world would be better off if only more people, and by extension more leaders, were empathic. In spite of these claims, however, researchers have used different terminologies to explore empathy, confusing it at times with other related constructs such as emotional intelligence and compassion. Moreover, extant research that specifies what empathic leaders do and how their behavior affects organizational stakeholders, including themselves, does not devolve from a unifying theoretical framework. These problems plague knowledge development in this important research domain. Therefore, to the authors' best knowledge, this paper provides the first comprehensive review and synthesis of the literature on leader empathy by drawing on disparate yet complementary fields of inquiry. First, it distinguishes empathy from related constructs. Second, it presents a theoretical model that elucidates the mechanisms by which a leader's empathy translates into behaviors that could be either beneficial or harmful to the leaders themselves, as well as to their followers and groups. Third, it specifies the boundary conditions under which a leader's empathy will become manifest. Finally, it suggests ways in which training could be implemented to improve empathy in practice while also remaining skeptical of its conceptualization as a moral or even effective guide in human affairs.
Keywords: compassion, empathy, leadership, group outcomes
Procedia PDF Downloads 132
3655 Theoretical and Experimental Analysis of Hard Material Machining
Authors: Rajaram Kr. Gupta, Bhupendra Kumar, T. V. K. Gupta, D. S. Ramteke
Abstract:
Machining of hard materials is a recent technology for the direct production of work-pieces. The primary challenge in machining these materials is the selection of cutting tool inserts which facilitates an extended tool life and high-precision machining of the component. These materials are widely used for making precision parts for the aerospace industry. Nickel-based alloys are typically used in extreme-environment applications where a combination of strength, corrosion resistance, and oxidation resistance is required. The present paper reports the theoretical and experimental investigations carried out to understand the influence of machining parameters on the response parameters. Considering the basic machining parameters (speed, feed, and depth of cut), a study has been conducted to observe their influence on material removal rate, surface roughness, cutting forces, and corresponding tool wear. Experiments are designed and conducted with the help of the Central Composite Rotatable Design technique. The results reveal that, for the given range of process parameters, higher depths of cut are favorable for material removal rate and low feed rates are favorable for cutting forces. Low feed rates and high rotational speeds are suitable for better finish and higher tool life.
Keywords: speed, feed, depth of cut, roughness, cutting force, flank wear
Procedia PDF Downloads 283
3654 Microwave Absorption Properties of Low Density Polyethylene-Cobalt Ferrite Nanocomposite
Authors: Reza Fazaeli, Reza Eslami-Farsani, Hamid Targhagh
Abstract:
Low density polyethylene (LDPE) nanocomposites with 3, 5, and 7 wt. % cobalt ferrite (CoFe2O4) nanopowder were fabricated by extrusion mixing followed by hot pressing to obtain compact samples. The transmission/reflection measurements were carried out with a network analyzer in the frequency range of 8-12 GHz. By increasing the content of CoFe2O4 nanopowder, reflection loss (S11) increases, while transmission loss (S21) decreases. Reflectivity (R) calculations were made using S11 and S21. Increasing the CoFe2O4 nanopowder content up to 7 wt. % in the composite led to a higher reflectivity, revealing that increasing the CoFe2O4 content up to 7 wt. % leads to further microwave absorption in the 8-12 GHz range.
Keywords: nanocomposite, cobalt ferrite, low density polyethylene, microwave absorption
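As context for the reflectivity calculation mentioned above: with scattering parameters expressed in dB, the reflected and transmitted power fractions are |S11|^2 and |S21|^2, and the absorbed fraction follows from power balance, R + T + A = 1. A small sketch with illustrative values (assumed for the demo, not the paper's measured data):

```python
def absorption(s11_db, s21_db):
    """Absorbed power fraction from S-parameters given in dB.
    |S11|^2 and |S21|^2 in linear power terms are 10**(dB/10)."""
    r = 10 ** (s11_db / 10)  # reflected power fraction
    t = 10 ** (s21_db / 10)  # transmitted power fraction
    return 1.0 - r - t

# Illustrative values: -10 dB reflection, -6 dB transmission
a = absorption(-10.0, -6.0)
print(f"absorbed fraction ≈ {a:.2f}")  # → absorbed fraction ≈ 0.65
```

This power-balance view is why a rising S11 loss and falling S21 with increasing filler content together indicate stronger microwave absorption.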
Procedia PDF Downloads 281
3653 Tectonics in Sustainable Contemporary Architecture: An Approach to the Intersection between Design and Construction in the Work of Norman Foster
Authors: Mafalda Fabiene Ferreira Pantoja, Joao Da Costa Pantoja, Rui Humberto Costa De Fernandes Povoas
Abstract:
The present paper seeks to present a theoretical and practical reflection on examples of contemporary architecture in a world context where concerns about the planet become prominent and increasingly necessary. Firstly, a brief introduction is made to the conceptual principles of tectonics in architecture, in order to apply such concepts in an analysis of the intersection between design and construction in contemporary examples of Norman Foster's architecture, since his work has demonstrated attitudes of composition concerned with place, technology, materials, and building life. Foster's compositions are usually focused on the role of technology in the process of architectural design, making his works a mixture of place, program, construction, and formal structures. The main purpose of the present paper is to reflect on the tools of theoretical and practical analysis of tectonics, optimizing the resources that allow cultural anchoring and the creation of identity, and establishing relations between resources, building life cycle, and the employment of correct materials, in order to find out how the tectonic concept can elevate the status of contemporary architecture, making it qualitative in a more sustainable context adapted to current needs.
Keywords: contemporary architecture, Norman Foster, tectonic, sustainable architecture
Procedia PDF Downloads 121
3652 Programmed Speech to Text Summarization Using Graph-Based Algorithm
Authors: Hamsini Pulugurtha, P. V. S. L. Jagadamba
Abstract:
Programmed speech-to-text and text summarization using graph-based algorithms can be utilized in meetings to obtain a short description of the meeting for future reference. The system provides signature verification using a Siamese neural network to confirm the identity of the user, and converts the audio recording provided by the user, which is in English, into English text using the speech recognition package available in Python. At times only the summary of the meeting is required; text summarization is the solution for this. Thus, the transcript is then summarized using natural language processing approaches, such as unsupervised extractive text summarization algorithms.
Keywords: Siamese neural network, English speech, English text, natural language processing, unsupervised extractive text summarization
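The abstract does not detail the graph-based summarization step. A generic TextRank-style sketch of unsupervised extractive summarization (a word-overlap similarity graph scored by PageRank-style power iteration, with toy sentences standing in for a meeting transcript) might look like this; it is an illustration of the family of algorithms named, not the authors' implementation:

```python
import numpy as np

sentences = [
    "the meeting discussed the quarterly budget and staffing",
    "staffing changes were approved for the budget",
    "lunch was served in the cafeteria",
]

def overlap(a, b):
    """Length-normalised word overlap between two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / (np.log(len(wa) + 1) + np.log(len(wb) + 1))

# Build the sentence-similarity graph as a weighted adjacency matrix
n = len(sentences)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = overlap(sentences[i], sentences[j])

# Row-normalise (guarding against isolated sentences) and iterate
rowsum = W.sum(axis=1, keepdims=True)
P = W / np.where(rowsum == 0, 1, rowsum)
d = 0.85  # damping factor, as in PageRank
scores = np.full(n, 1.0 / n)
for _ in range(50):
    scores = (1 - d) / n + d * P.T @ scores

# The highest-scoring sentence is the one-line extractive summary
summary = sentences[int(np.argmax(scores))]
print(summary)
```

Sentences that share vocabulary with many others accumulate score, so the off-topic "lunch" sentence is ranked lowest and excluded from the summary.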
Procedia PDF Downloads 215
3651 Strong Antiferromagnetic Super Exchange in AgF2
Authors: Wojciech Grochala
Abstract:
AgF2 is an important two-dimensional antiferromagnet and an analogue of the [CuO2]2– sheet. However, the strength of the magnetic superexchange as well as the magnetic dimensionality have not been explored before. Here we report our recent Raman and neutron scattering experiments, which have led to a better understanding of the magnetic properties of the title compound. It turns out that the intra-sheet magnetic superexchange constant reaches 70 meV, some 2/3 of the value measured for the parent compounds of oxocuprate superconductors, which is over 100 meV. The ratio of intra- to inter-sheet superexchange constants is of the order of 10^2, rendering AgF2 a quasi-2D material, similar to the said oxocuprates. Quantum mechanical calculations reproduce the abovementioned values quite well, and they point to substantial covalence of the Ag–F bonding. After three decades of intense research on layered oxocuprates, AgF2 now stands as a second-to-none analogue of these fascinating systems. It remains to be seen whether this 012 parent compound may be doped in order to achieve superconductivity.
Keywords: antiferromagnets, superexchange, silver, fluorine
Procedia PDF Downloads 127
3650 The Social Process of Alternative Dispute Resolution and Collective Conciliation: Unveiling the Theoretical Framework
Authors: Adejoke Yemisi Ige
Abstract:
This study presents a conceptual analysis and investigation into the development of a systematic framework required for a better understanding of the social process of Alternative Dispute Resolution (ADR) and collective conciliation. The critical examination presented in this study is significant because it draws on insights from the ADR, negotiation, and collective bargaining literature and applies them in advancing a methodical outline which gives insight into the influence of the key actors' and other stakeholders' strategies and behaviours during dispute resolution in relation to the outcomes, which is novel. This study is qualitative and essentially inductive in nature. One of the findings of the study confirms the need to consider ADR and collective conciliation within the context of the characteristic conditions, which focus on the need for some agreement to be reached. Another finding of the study shows the extent to which information-sharing and the willingness of the parties to negotiate and make concessions assist both parties to attain resolution. This paper recommends that, in order to overcome deadlock and attain acceptable outcomes at the end of ADR and collective conciliation, the importance of information exchange and of sustaining the trade union and management relationship cannot be overstated. The need for trade union and management representatives to achieve their expectations in order to build the confidence and assurance of their respective constituents is essential. In conclusion, the analysis presented in this study points towards a set of factors that together can be called the social process of collective conciliation; nevertheless, it acknowledges that its application to collective conciliation is new.
Keywords: alternative dispute resolution, collective conciliation, social process, theoretical framework, unveiling
Procedia PDF Downloads 152
3649 Empirical Analysis of Velocity Behavior for Collaborative Robots in Transient Contact Cases
Authors: C. Schneider, M. M. Seizmeir, T. Suchanek, M. Hutter-Mironovova, M. Bdiwi, M. Putz
Abstract:
In this paper, a suitable measurement setup is presented to conduct force and pressure measurements for transient contact cases, using the example of lathe machine tending. Empirical measurements were executed on a selected collaborative robot's behavior regarding allowable operating speeds under consideration of sensor- and workpiece-specific factors. Comparisons between the theoretical calculations proposed in ISO/TS 15066 and the practical measurement results reveal a basis for future research. With the created database, preliminary risk assessment and economic assessment procedures of collaborative machine tending cells can be facilitated.
Keywords: biomechanical thresholds, collaborative robots, force and pressure measurements, machine tending, transient contact
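For reference, the ISO/TS 15066 theoretical calculation alluded to above bounds the transferable energy in a transient contact by E = F_max^2 / (2k), which combined with E = mu * v^2 / 2 gives an allowable relative speed v = F_max / sqrt(mu * k), where mu is the reduced mass of the robot and the contacted body region. A sketch with illustrative hand/finger-region numbers (assumed for the demo, not the paper's measurements):

```python
import math

def max_speed(f_max, k_spring, m_robot, m_body):
    """Transient-contact speed limit from the two-body energy model:
    E = F^2 / (2k) = 0.5 * mu * v^2  =>  v = F / sqrt(mu * k)."""
    mu = 1.0 / (1.0 / m_robot + 1.0 / m_body)  # reduced (effective) mass
    return f_max / math.sqrt(mu * k_spring)

# Illustrative inputs: transient force limit 280 N, body-region stiffness
# 75 kN/m, 30 kg effective robot mass, 0.6 kg effective hand mass
v = max_speed(f_max=280.0, k_spring=75_000.0, m_robot=30.0, m_body=0.6)
print(f"allowable relative speed ≈ {v:.2f} m/s")
```

Comparing such computed limits against measured peak forces and pressures is exactly the kind of theory-versus-experiment check the abstract describes.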
Procedia PDF Downloads 241
3648 Molecularly Imprinted Polymer and Computational Study of (E)-2-Cyano-3-(Dimethylamino)-N-(2,4-Dioxo-1,2,3,4-Tetrahydropyrimidin-5-Yl)Acrylamide and Its Applications in Industrial Applications
Authors: Asmaa M. Fahim
Abstract:
In this investigation, (E)-2-cyano-3-(dimethylamino)-N-(2,4-dioxo-1,2,3,4-tetrahydropyrimidin-5-yl)acrylamide (4) was used as the template (TAM), which interacts with the methacrylic acid (MAA) monomer in the presence of CH3CN as porogen. The TAM-MAA complex interactions depend on a stable hydrogen bonding interaction between the carboxylic acid group of TAM (template) and the hydroxyl group of MAA (methacrylic acid), with minimal interference from the porogen CH3CN. Computational studies were used to optimize the structures and perform frequency calculations. The binding energies between TAM and the different monomers showed the most stable molar ratio to be 1:4, which was confirmed through experimental analysis. The optimized polymers were investigated in industrial applications.
Keywords: molecular imprinted polymer, computational studies, SEM, spectral analysis, industrial applications
Procedia PDF Downloads 156
3647 Investigation of Bremsstrahlung, Braking Radiation from Beta-Emitting Radioactive Sources
Authors: Metin Kömsöken, Ayşe Güneş Tanır, Onur Karaman
Abstract:
The use of high-energy charged particles for diagnosis and treatment has become widespread in medicine. The main purpose is to show that bremsstrahlung, which occurs through tissue interactions with charged particles, should not be neglected. The stopping power due to bremsstrahlung was calculated for lung, brain, skin, muscle, bone (cortical), and water targets, for the energies of electrons obtained from the LINACs used in radiotherapy and of the β+ sources used in positron emission tomography (PET). These calculations were done using four different analytical functions: the classical Bethe-Bloch, Tsoulfanidis, modified Bethe-Bloch, and modified Tsoulfanidis equations. It was concluded that the obtained results were compatible with those of the National Institute of Standards and Technology (NIST-ESTAR).
Keywords: β- emitting source, bremsstrahlung, therapeutic radionuclides, LINAC
Procedia PDF Downloads 332
3646 Spare Part Carbon Footprint Reduction with Reman Applications
Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş
Abstract:
Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and to introduce products of almost the same quality, with a smaller environmental footprint and at lower cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on a Life Cycle Analysis framed by ISO 14040 principles. The study aims to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and diesel particulate filter (DPF). Since Ford Motor Company and Ford OTOSAN aim to achieve net zero in line with Science-Based Targets (SBT) and the European Union's Green Deal goal of climate neutrality by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were searched, prioritized by the yearly sales volume of the spare parts. Based on the reviewed papers' material compositions and the emissions reported for initial production and remanufacturing, a base part was selected as a reference. The data for this base part were then used to make an approximate estimate of the carbon footprint reduction of the corresponding part used at Ford OTOSAN; the estimation model scales the referenced reman results by the weight and material composition of the part. This study shows that remanufacturing applications are technically and environmentally feasible, since they significantly reduce the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the full set of targeted products, at yearly volumes, have been largely completed.
Thus, based on the targeted parts for which research has been completed, and in line with the 2050 net zero targets of Ford Motor Company and Ford OTOSAN, a significant share of the greenhouse gas (GHG) emissions associated with vehicle spare parts can be avoided if remanufacturing is preferred over conventional production methods. In addition, remanufacturing reduces the waste stream and causes less pollution than making products from raw materials, because automotive components are reused.
Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability
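The weight-based scaling model described above can be sketched as follows; the part masses and reference emission figures are placeholder assumptions for illustration, not Ford OTOSAN or literature data:

```python
def estimate_reman_saving(ref_new_kgco2e, ref_reman_kgco2e, ref_mass_kg, part_mass_kg):
    """Scale a literature reference part's new-vs-reman emissions to a target
    part by the ratio of part masses, and return the estimated saving."""
    scale = part_mass_kg / ref_mass_kg
    new_production = ref_new_kgco2e * scale    # estimated footprint if newly made
    remanufacture = ref_reman_kgco2e * scale   # estimated footprint if reman'd
    return new_production - remanufacture      # kg CO2e saved per unit

# Hypothetical alternator: 90 kg CO2e new vs 30 kg CO2e reman for a 7 kg
# reference unit from the literature, scaled to a 6 kg target part:
saving_per_unit = estimate_reman_saving(90.0, 30.0, 7.0, 6.0)  # ~51.4 kg CO2e
annual_saving = saving_per_unit * 10_000                       # at 10,000 units/yr
```

A mass-ratio scaling of this kind is only a first approximation; a full LCA would also adjust for differences in material composition between the reference and target parts.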
Procedia PDF Downloads 80
3645 The Influence of a Vertical Rotation on the Fluid Dynamics of Compositional Plumes
Authors: Khaled Suleiman Mohammed Al-Mashrafi
Abstract:
A compositional plume is a flow of one fluid, confined to a channel of finite width, through another fluid of different material composition. The study of the dynamics of compositional plumes plays an essential role in many real-life applications: industrial (e.g., iron casting), environmental (e.g., salt fingers and sea ice), and geophysical (e.g., solidification at the inner core boundary (ICB) of the Earth, and mantle plumes). The dynamics of compositional plumes have been investigated experimentally and theoretically. Experimental work has generally observed that the plume flow appears stable, although some experiments showed that it can be unstable; theoretical investigations, by contrast, show that the plume flow is unstable. This is found to be true even if the plume is subject to rotation and/or a magnetic field, and even if another plume of different composition is also present. Notably, all previous theoretical studies of the dynamics of compositional plumes were conducted in unbounded domains. The present work investigates theoretically the influence of vertical walls (boundaries) on the dynamics of compositional plumes in the absence and presence of rotation. The mathematical model uses the equations of continuity, motion, heat, concentration of light material, and state. It is found that the presence of boundaries strongly influences the basic-state solution as well as the stability of the plume, particularly when the plume is close to a boundary, but the compositional plume remains unstable.
Keywords: compositional plumes, stability, bounded domain, vertical boundaries
Procedia PDF Downloads 30
3644 Re-Imagining and De-Constructing the Global Security Architecture
Authors: Smita Singh
Abstract:
The paper develops a critical framework for analysing the hegemonic discourses resorted to by the dominant powers in the global security architecture. Within this framework, security is viewed as a discourse through which identities and threats are represented and produced to legitimize the security concerns of a few at the cost of others. International security has long been driven and dominated by power relations. Since the end of the Cold War, global transformations have triggered contestations of the idea of security at both the theoretical and the practical level. This widening and deepening of the concept of security has challenged the existing power hierarchies at the theoretical level but has not altered the substance of security or the actors defining it. When discourses are introduced into security studies, several critical questions arise: How has power shaped the security policies of the globe through language? How does one understand the meanings and impact of those discourses? Who decides the agenda, rules, players, and outliers of security? Language, as a symbolic system and a form of power, is fluid rather than fixed. Over the years, the dominant Western powers, led by the United States of America, have employed various discursive practices, such as humanitarian intervention, the responsibility to protect, non-proliferation, human rights, and the war on terror, to reorient the constitution of identities and interests and hence the policies needed for their actualization. These power relations are illustrated in this paper through the narratives used in the non-proliferation regime. The hierarchical security dynamics are a manifestation of global power relations driven by many factors, including discourses.
Keywords: hegemonic discourse, global security, non-proliferation regime, power politics
Procedia PDF Downloads 317
3643 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft construction, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be a reference base or a reference rangefinder with the capability to measure angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of an EDM for replicating the unit of measurement has been implemented on a mobile platform that allows angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of flight of light, following the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses.
The achieved Type A measurement uncertainty for distances to reflectors 64 m away (at N·D/2, where N is an integer) and spaced 1 m apart relative to each other does not exceed 5 microns. The angular uncertainty is estimated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. The Type B uncertainty components are not taken into account either, as the dominant components do not depend on the selected coordinate-measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers, laser rangefinders, and other equipment, using a 64-meter laboratory comparator as a reference.
Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
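The distance quantum quoted above can be checked numerically; the repetition rate F below is an assumed value chosen to reproduce the stated ~2.5 m spacing (it is not given in the abstract), and n is an approximate refractive index for laboratory air:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)
n = 1.00027         # refractive index of laboratory air (approximate)
F = 60e6            # femtosecond pulse repetition rate, Hz (assumed value)

half_D = c / (2.0 * n * F)   # spacing between pulse coincidences, ~2.5 m
N = round(64.0 / half_D)     # integer index of a reflector near the 64 m mark
```

Measured distances are then integer multiples N·D/2 of this quantum, refined by the interferometric coincidence signal between the reference and measuring arms.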
Procedia PDF Downloads 57
3642 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car
Authors: Erik Vassøy Olsen, Hirpa G. Lemu
Abstract:
Inspired by the Formula 1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions in which university and college students worldwide compete with a single-seat race car they have designed and built. The design of the chassis, or frame, is a key component of the competition because its weight and stiffness are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight directly influences the design of other components in the car; among other things, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or the battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands a reduction in the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned requirements, the monocoque should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the monocoque requirements in the competition rules, several tests were conducted, including a 3-point bending test, a perimeter shear test, and a test of absorbed energy. The test panels were manufactured in-house with dimensions equivalent to the side impact zone of the monocoque, i.e. 275 mm x 500 mm, so that the obtained results are representative.
Different layups of the test panels, with identical core material and the same number of layers of carbon fibre, were tested and compared, and the influence of core material thickness was also studied. Furthermore, analytical calculations and numerical analysis were conducted to check compliance with the stated rules for structural equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite selected for the monocoque have structural properties equivalent to those of a welded frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different layups and compositions.
Keywords: composite material, Formula student, ION racing, monocoque design, structural equivalence
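For interpreting such 3-point bending results, standard thin-face sandwich beam theory gives a useful first estimate of midspan deflection (bending carried by the faces plus shear carried by the core); the material values below are generic placeholder assumptions, not the team's measured data:

```python
def sandwich_3pb_deflection(P, L, b, t_f, t_c, E_f, G_c):
    """Midspan deflection of a sandwich beam under a central load P in
    3-point bending: bending term (thin faces) + shear term (core)."""
    d = t_f + t_c                     # distance between face-sheet centroids, m
    D = E_f * b * t_f * d**2 / 2.0    # flexural rigidity, thin-face theory, N*m^2
    S = b * d * G_c                   # core shear stiffness, N
    return P * L**3 / (48.0 * D) + P * L / (4.0 * S)

# 500 mm span, 275 mm width (matching the side-impact panel size), with
# hypothetical 0.5 mm CFRP faces and a 20 mm foam core:
delta = sandwich_3pb_deflection(P=1000.0, L=0.5, b=0.275,
                                t_f=0.5e-3, t_c=20e-3, E_f=60e9, G_c=85e6)
```

A closed-form estimate of this kind serves as a quick sanity check on both the test data and the numerical models before making the structural-equivalency comparison.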
Procedia PDF Downloads 501
3641 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene has emerged as a promising material for applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology owing to the enriched physics and the novel, enhanced properties they present. There are several advantages to using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, a high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar, and N2) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior, with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3, and 2 V/μm for WS2, RGO, and the WS2/RGO composite, respectively. The enhanced field emission observed for the WS2/RGO nanocomposite is attributed to its high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density, of ~800 μA/cm2, is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, with graphene-like states appearing in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 μA/cm2 is significantly lower (almost half) for the SnS2/RGO nanocomposite (2.65 V/μm) than for pristine SnS2 nanosheets (4.8 V/μm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite), calculated from Fowler-Nordheim (FN) plots, indicates emission from the nanometric geometry of the emitters. The field emission current versus time plot shows good overall emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS2/RGO composite arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
Keywords: graphene, layered material, field emission, plasma, doping
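Enhancement factors like those quoted above are conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. A minimal sketch with synthetic data (the work function and the generated points are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

B = 6.83e3     # FN constant in V um^-1 eV^-3/2, for E expressed in V/um
phi = 4.5      # assumed emitter work function, eV

# Synthetic FN-linear data generated with a known beta, to show the recovery:
beta_true = 3200.0
E = np.linspace(3.0, 5.0, 20)                      # applied field, V/um
ln_J_over_E2 = 10.0 - B * phi**1.5 / (beta_true * E)

# FN analysis: slope of ln(J/E^2) vs 1/E gives -B*phi^1.5/beta
slope, intercept = np.polyfit(1.0 / E, ln_J_over_E2, 1)
beta_est = -B * phi**1.5 / slope                   # recovers beta ~ 3200
```

On real current-voltage data, the same fit over the FN-linear region yields β, which is then read as a measure of the geometric field amplification at the emitter tips and edges.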
Procedia PDF Downloads 359
3640 Competitivity in Procurement Multi-Unit Discrete Clock Auctions: An Experimental Investigation
Authors: Despina Yiakoumi, Agathe Rouaix
Abstract:
Laboratory experiments were run to investigate the impact of different design characteristics of the auctions that have been implemented to procure capacity in the UK's reformed electricity market. The experiment studies competition among bidders in procurement multi-unit discrete descending clock auctions under different feedback policies and pricing rules. Theory indicates that the feedback policy, in combination with the two common pricing rules, last-accepted bid (LAB) and first-rejected bid (FRB), can significantly affect the auction outcome. Two information feedback policies regarding participants' bidding prices are considered: with feedback and without feedback. With feedback, after each round participants are informed of the number of items still in the auction; without feedback, participants have no information about aggregate supply. Under LAB, winning bidders receive the amount of the highest successful bid; under FRB, winning bidders receive the lowest unsuccessful bid. Based on the theoretical predictions for the alternative auction designs, three treatments were run: the first considers LAB with feedback; the second, LAB without feedback; the third, FRB without feedback. The theoretical predictions of the game show that under FRB, the alternative feedback policies make no difference to the auction outcome. Preliminary results indicate that LAB with feedback and FRB without feedback achieve, on average, higher clearing prices than LAB without feedback. However, the clearing prices under LAB with feedback and FRB without feedback are on average lower than the theoretical predictions. Although under LAB without feedback theory predicts that the clearing price will drop to the competitive equilibrium, the experimental results indicate that participants can still engage in cooperative behavior and drive up the price of the auction.
It is shown, both theoretically and experimentally, that the pricing rules and the feedback policy affect the bidding competitiveness of the auction by providing participants with opportunities to engage in cooperative behavior and exercise market power. LAB without feedback appears less vulnerable to market power than the alternative auction designs. This could be an argument for using the LAB pricing rule in combination with limited feedback in the UK capacity market, in an attempt to improve affordability for consumers.
Keywords: descending clock auctions, experiments, feedback policy, market design, multi-unit auctions, pricing rules, procurement auctions
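Under truthful bidding, the two pricing rules reduce to order statistics of the exit prices, which makes the theoretical benchmark easy to state. A stylized sketch with illustrative unit costs (not the experimental parameters):

```python
def clearing_price(costs, K, rule):
    """Uniform price in a descending procurement clock auction with truthful
    unit-supply bidders, each exiting when the clock falls below their cost;
    K units are procured."""
    exits = sorted(costs)          # ascending costs = reverse order of exit
    if rule == "LAB":
        return exits[K - 1]        # last-accepted bid: highest winning cost
    if rule == "FRB":
        return exits[K]            # first-rejected bid: lowest losing cost
    raise ValueError(f"unknown rule: {rule}")

costs = [10, 14, 15, 22, 30, 41]   # hypothetical per-unit costs
lab = clearing_price(costs, K=3, rule="LAB")   # 15
frb = clearing_price(costs, K=3, rule="FRB")   # 22
```

The experimental question is precisely how far observed clearing prices deviate from these truthful benchmarks once feedback about remaining supply lets bidders coordinate and exercise market power.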
Procedia PDF Downloads 296
3639 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI are mainly due to deficiencies in the conceptual theories of knowledge representation, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria for human cumulative knowledge, from the perspective of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the very materialistic nature of the cognitive sciences. This nature causes what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These deficiencies are not confined to applications of knowledge representation theories throughout AI but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies investigated in our work are:
- the segregation between cognitive abilities in knowledge-driven models;
- the insufficiency of the two-value logic used to represent knowledge, particularly at the machine language level, in relation to the problematic issues of semantics and meaning theories;
- the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed account of the manner in which the meanings of existence and time are to be considered in the structure of knowledge.
This does not imply that such parameters are easy to incorporate into knowledge representation systems, but outlining a deficiency caused by their absence can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning assigns the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria for determining AI's capability to achieve its ultimate objectives. Finally, we argue some implications of our findings: even though scientific progress may not have reached its peak, and human science may not yet be able to uncover evolutionary facts about the human brain or detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 111
3638 Determinants of Customer Value in Online Retail Platforms
Authors: Mikko Hänninen
Abstract:
This paper explores the effect online retail platforms have on customer behavior and retail patronage through an inductive multi-case study. Existing research on retail platforms and ecosystems generally focuses on competition between platform members, and most papers maintain a managerial perspective in which customers are seen merely as one stakeholder in the value-exchange relationship. It is proposed that retail platforms change the nature of customer relationships compared with traditional brick-and-mortar or e-commerce retailers. With online retail platforms such as Alibaba, Amazon, and Rakuten gaining increasing traction with their platform-based business models, the purpose of this paper is to define retail platforms and to examine how the leading ones create value for their customers in order to foster meaningful customer relationships. The major global retail platforms are analysed with a specific focus on the tools they have in place for creating customer value, showing how retail platforms create and maintain customer relationships to foster loyalty. The results describe the opportunities and challenges retailers face when competing against platform-based businesses and outline the advantages and disadvantages that platforms bring to individual consumers. Based on the inductive case research approach, five theoretical propositions on consumer behavior in online retail platforms are developed, which also form the basis of further research; the study thereby makes both a practical and a theoretical contribution to the platform research stream.
Keywords: retail, platform, ecosystem, e-commerce, loyalty
Procedia PDF Downloads 280