Search results for: paper and pulp mill effluent
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25130

1190 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high demand for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well relate to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, a subject-matter foundation and cultural references, will undoubtedly challenge the translator; longer time and greater effort are the consequence. To understand these text-related challenges, the present paper sets out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book was chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; it can therefore be considered a worthy subject for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators prepare themselves better for the task: they can anticipate the research and time that may be needed to produce an effective translation. A further premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. The concepts of the context of culture and the context of situation, and the measures of field, tenor and mode, form the instruments of analysis. Additional examples from similar materials are also used to validate the findings.
Some interesting findings include the presence of several other text-types or sub-text-types in the book, and a reliance on literary discourse and devices to better capture meanings or add color to the dry field of law. In addition, many elements of culture can be seen: for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods and languages. Also found are parts that discuss the origins of words and terms that may be relevant to readers in the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis of the book's functions, and of the linguistic and textual devices used to achieve them, can then be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 131
1189 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the 1980s, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling that project managers tend to stockpile the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known to the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials so as to minimize the maximal tardiness among the projects. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey-Clarke-Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation.
The resulting mechanism is both truthful and Pareto optimal; thus, randomization over the possible priority orderings of the projects yields a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given relative to the optimal solution, so this approximation behavior is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism that eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution.
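As an illustration, the serial-dictatorship idea can be sketched in a deliberately simplified setting: a single material type, each project needing a fixed number of units, and a project finishing when its last allotted unit arrives. The data and helper below are hypothetical assumptions for exposition, not the paper's actual polynomial-time algorithm.

```python
def serial_dictatorship(arrivals, projects, priority):
    """Allocate material units to projects in a fixed priority order.

    arrivals: material arrival times (one unit per arrival).
    projects: dict name -> (units_needed, due_date).
    priority: list of project names, the dictatorship order.
    Returns the allocation and the maximal tardiness.
    """
    pool = sorted(arrivals)            # earliest units are the most desirable
    allocation, tardiness = {}, {}
    for name in priority:
        units, due = projects[name]
        take, pool = pool[:units], pool[units:]   # dictator grabs the earliest units
        allocation[name] = take
        # the project completes when its last unit arrives
        tardiness[name] = max(0, take[-1] - due) if take else float("inf")
    return allocation, max(tardiness.values())

# Two projects compete for four material deliveries.
alloc, worst = serial_dictatorship(
    [1, 2, 5, 9], {"A": (2, 3), "B": (2, 8)}, ["A", "B"]
)
print(alloc, worst)  # A takes the two earliest units and is on time; B is 1 late
```

Randomizing the `priority` list, as the abstract describes, turns this into a randomized mechanism; each fixed ordering already gives each agent no incentive to misreport its due date, since later agents cannot affect earlier picks.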

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 318
1188 Women's Parliamentary Representation in Uganda: A Relative Analysis of the Pathways of Women on the Open vs. Affirmative Action Seat

Authors: Doreen Chemutai

Abstract:

While women's parliamentary representation has increased over the years, most women contest the affirmative action (A.A.) seat. There is a lack of knowledge on why women prefer the affirmative seat vis-à-vis the open seat. This study argues that comparing women's paths to parliamentary representation on the reserved and open seats enables us to pass judgment on why this trend continues. This paper provides a narrative analysis of the trajectories of women members of parliament (MPs) on the open seat and the affirmative action seat. Purposive sampling was used to select participants from the Northern Uganda districts of Kitgum, Pader, Oyam, Agago, and Gulu. The eight women MPs chosen for the study completed in-depth interviews exploring their qualifications, careers, and experiences before joining political office, their party affiliation, and the kind of seat they currently occupy in the 10th Parliament. Findings revealed similarities between women on the open and reserved seats: irrespective of the seat they contest, women find it difficult to win elections because voters doubt women's effectiveness as leaders, and all women incumbents find it difficult to be re-elected because their evaluation is harsher than that of men. Findings also revealed that women representatives are motivated by their personal lived experiences, community work, educational leadership, and local leadership. The study establishes that the popularity of the party in a given geographical location and the quality of the opponents determine the success of the parliamentary candidate in question, irrespective of whether one contests the open or the affirmative seat. However, the study revealed differences between MPs' experiences in the quest for a parliamentary seat: women on the open seat are subjected to gender discrimination in elections by party leadership, stereotyped, and made victims of propaganda in the initial contesting stages.
Women who win elections on the open seat have to be superior to their male opponents. In other circumstances where a woman emerges successful, she may be voted for due to reasons beyond capability, such as physical appearance or sociability. On the other hand, the accounts of MPs on affirmative action seats show that the political terrain is smoother despite larger constituencies. Findings show that women on the affirmative action seat do not move to the open seat because of the comfort associated with the seat and to maintain consistency, since constituencies doubt the motives of representatives who change from one seat to another. The study concludes that women MPs who contest the open seat are more likely to suffer structural barriers, such as gender discrimination and political recruitment bias, than women on the affirmative seat. This explains why the majority of women contest the affirmative seat.

Keywords: affirmative action seats, open seats, parliamentary representation, pathways

Procedia PDF Downloads 136
1187 Studies on Pre-ignition Chamber Dynamics of Solid Rockets with Different Port Geometries

Authors: S. Vivek, Sharad Sharan, R. Arvind, D. V. Praveen, J. Vigneshwar, S. Ajith, V. R. Sanal Kumar

Abstract:

In this paper, numerical studies have been carried out to examine the starting transient flow features of high-performance solid propellant rocket motors with different port geometries but the same propellant loading density. Numerical computations have been carried out using a 3D SST k-ω turbulence model. This code solves the standard k-ω turbulence equations with shear flow corrections using a coupled second-order implicit unsteady formulation. In the numerical study, a fully implicit finite volume scheme for the compressible Reynolds-averaged Navier-Stokes equations is employed. We have observed from the numerical results that in solid rocket motors with highly loaded propellants and divergent port geometry, the hot igniter gases can create pre-ignition thrust oscillations due to flow unsteadiness and recirculation. Under these conditions, the convective flux to the surface of the propellant is enhanced, which creates a reattachment point far downstream of the transition region and sets up a situation for secondary ignition and the formation of multiple flame fronts. As a result, the effective time required for the complete burning surface area to be ignited comes down drastically, giving rise to a high pressurization rate (dp/dt) in the second phase of the starting transient. This in effect could lead to starting thrust oscillations and eventually a hard start of the solid rocket motor. We have also observed that the igniter temperature fluctuations diminish rapidly and reach the steady-state value faster in solid propellant rocket motors with a convergent port than with a divergent port, irrespective of the igniter total pressure. We have concluded that the thrust oscillations and unexpected thrust spikes often observed in solid rockets with non-uniform ports are presumably caused by the joint effects of geometry-dependent driving forces, transient burning, and chamber gas dynamic forces.
We also concluded that the prudent selection of the port geometry, without altering the propellant loading density, for damping the total temperature fluctuations within the motor is a meaningful objective for the suppression and control of instability and/or pressure/thrust oscillations often observed in solid propellant rocket motors with non-uniform port geometry.

Keywords: ignition transient, solid rockets, starting transient, thrust transient

Procedia PDF Downloads 430
1186 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect on economic growth of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
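The fixed-effects (within) estimator used in the study can be sketched on synthetic panel data: country-specific intercepts are removed by demeaning each variable within its country, after which ordinary least squares recovers the slope coefficients. All variable names, coefficients and data below are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

# Synthetic balanced panel: 10 countries observed over 8 years.
rng = np.random.default_rng(0)
n_countries, n_years = 10, 8
country = np.repeat(np.arange(n_countries), n_years)

digitization = rng.normal(size=n_countries * n_years)
findev = 0.5 * digitization + rng.normal(size=n_countries * n_years)
alpha = rng.normal(size=n_countries)[country]           # unobserved country effects
growth = (alpha + 0.3 * digitization + 0.4 * findev
          + rng.normal(scale=0.1, size=n_countries * n_years))

def demean(x, g):
    """Subtract each group's mean (the 'within' transformation)."""
    means = np.array([x[g == i].mean() for i in np.unique(g)])
    return x - means[g]

# Demeaning wipes out alpha exactly, so OLS on the transformed data is unbiased.
X = np.column_stack([demean(digitization, country), demean(findev, country)])
y = demean(growth, country)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # both estimates should land near the true slopes (0.3, 0.4)
```

The random-effects and Hausman-Taylor estimators the abstract also mentions require modeling the country effects rather than eliminating them, and are not shown here.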

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 86
1185 Stress Corrosion Cracking Tests of Candidate Materials in Support of the Development of the European Small Modular Supercritical Water Cooled Reactor Concept

Authors: Radek Novotny, Michal Novak, Daniela Marusakova, Monika Sipova, Hugo Fuentes, Peter Borst

Abstract:

This research has been conducted within the European HORIZON 2020 project ECC-SMART. The main objective is to assess whether it is feasible to design and develop a small modular reactor (SMR) cooled by supercritical water (SCW). One of the main objectives of the material research concerns the corrosion of the candidate cladding materials. The experimental part has been conducted in support of the qualification procedure for future SCW-SMR construction materials. A further objective was to identify the gaps in current norms and guidelines. Apart from corrosion resistance testing of the candidate materials, stress corrosion cracking (SCC) susceptibility tests have been performed in supercritical water. This paper describes part of these tests, in particular slow strain rate tensile loading applied to tangential ring-shaped specimens of two candidate materials, Alloy 800H and 310S stainless steel. These ring tensile tests are one of the methods used for tensile testing of nuclear cladding. Here, full circular heads with dimensions roughly equal to the inner diameter of the sample are used, and the gage sections are placed parallel to the applied load. Slow strain rate tensile tests have been conducted in 380 °C or 500 °C supercritical water, applying two different elongation rates, 1x10^-6 and 1x10^-7 s^-1. The effect of temperature and dissolved oxygen content on the SCC susceptibility of Alloy 800H and 310S stainless steel was investigated by applying two different temperatures and concentrations of dissolved oxygen in supercritical water. The post-fracture analysis includes fractographic analysis of the fracture surfaces using SEM as well as cross-sectional analysis of the occurrence of secondary cracks. The effect of the environment and dissolved oxygen content was assessed by comparison with the results of reference tests performed in air and under N2 gas overpressure.
The effect of high temperature on creep and its role in the initiation of SCC was assessed as well. It has been concluded that the applied test method could be very useful for the investigation of stress corrosion cracking susceptibility of candidate cladding materials in supercritical water.
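For scale, the strain rates quoted above imply very long test durations. A back-of-the-envelope sketch, assuming a constant strain rate and an illustrative 10% target strain (the target strain is an assumption, not a figure from the abstract):

```python
# Time to accumulate a given strain at a constant strain rate.
def test_duration_days(strain, strain_rate_per_s):
    seconds = strain / strain_rate_per_s   # strain = rate * time
    return seconds / 86400.0               # convert seconds to days

# At the slower rate of 1e-7 1/s, even 10% strain takes roughly 11.6 days,
# which is why slow strain rate tests in supercritical water run for weeks.
print(round(test_duration_days(0.10, 1e-7), 1))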

Keywords: stress corrosion cracking, ring tensile tests, super-critical water, alloy 800H, 310S stainless steel

Procedia PDF Downloads 71
1184 Between the House and the City: An Investigation of the Structure of the Family/Society and the Role of the Public Housing in Tokyo and Berlin

Authors: Abudjana Babiker

Abstract:

The middle of the twentieth century witnessed an explosion in public housing. After the Great Depression, capitalist and communist countries alike launched policies and programs to produce public housing in urban areas. Concurrently, modernism, the leading architectural style of the time, strongly supported this production and was the principal instrument of the success of public housing programs, owing to the modernist manifesto for a manufactured architecture: an international style that serves society and, in parallel, connects it to the other design industries, which allowed for the mass production of architectural elements. After the Second World War, public housing flourished, especially in communist countries. The idea of public housing was conceived at the time in terms of living spaces, while workplaces performed as the places of production and labor. At the end of the twentieth century, Michel Foucault's introduction of biopolitics highlighted the alteration in the inter-function of production and labor. The house does not strictly perform as a sanctuary from production for the family; it opens onto the city as a space for production, not only to produce objects but to reproduce the family as an integral part of the production mechanism of the city. While public housing kept altering from one country to another after the failure of modernist public housing in the late 1970s, society continued changing in parallel with the socio-economic conditions of each political-economic system, and public housing followed. The family structure in major cities has been changing dramatically: single parenting and long working hours, for instance, have been escalating loneliness in major cities such as London, Berlin, and Tokyo, and public housing designed for families no longer suits the single lifestyle of individuals.
This paper investigates the performance of both the single/individual lifestyle and the family/society structure in Tokyo and Berlin in relation to the utilization of public housing under the economic policies and socio-political environment that produced the individual and the collective. The study is carried out through an examination of the undercurrent individual/society relation and through case studies of the performance and utilization of the housing. The major finding is that the individual and the collective revolve around the city: the city acts as a system that magnetizes and blurs the line between the production and reproduction lifestyles. Mass public housing for families is shifting toward a combination of neo-liberal and socialist housing.

Keywords: loneliness, production and reproduction, work-live, public housing

Procedia PDF Downloads 172
1183 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data

Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding

Abstract:

The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets only to meet functional performance is neither economical nor sustainable in the long term; it ends up requiring much greater investment from road agencies and imposing extra costs on road users. Performance models have to include both structural and functional predictive capabilities in order to assess needs and their time frame. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. The structural deterioration model is therefore a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that can continuously measure the structural bearing capacity of a pavement while moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axles (ESA). This study developed a simple structural deterioration model which will enable available TSD structural data to be incorporated in a pavement management system for developing network-level pavement investment strategies.
The available funding can therefore be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in the use of structural data in network-level investment analysis and provide a simple methodology for using structural data effectively in the investment decision-making process, helping road agencies manage aging road assets.
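A regression of the kind described, with maximum deflection D0 modeled as a function of explanatory variables such as pavement age and cumulative ESA, can be sketched on synthetic data. The coefficients, units and data below are illustrative assumptions, not the study's calibrated model.

```python
import numpy as np

# Synthetic network of 200 pavement sections.
rng = np.random.default_rng(1)
age = rng.uniform(0, 30, 200)        # pavement age, years
esa = rng.uniform(0.1, 20, 200)      # cumulative traffic, million ESA
# Assumed "true" deterioration: deflection grows with age and traffic.
d0 = 250 + 4.0 * age + 6.0 * esa + rng.normal(scale=10, size=200)  # microns

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(age), age, esa])
coef, *_ = np.linalg.lstsq(X, d0, rcond=None)
print(coef)  # coefficients recovered close to the assumed (250, 4, 6)
```

In a pavement management system, such a fitted equation would be applied to each section's age and traffic forecasts to project future D0 and flag sections approaching a structural intervention threshold.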

Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)

Procedia PDF Downloads 138
1182 Biopolymers: A Solution for Replacing Polyethylene in Food Packaging

Authors: Sonia Amariei, Ionut Avramia, Florin Ursachi, Ancuta Chetrariu, Ancuta Petraru

Abstract:

The food industry is one of the major generators of plastic waste, derived from conventional synthetic petroleum-based polymers that are non-biodegradable and used especially for packaging. After the food is consumed, these packaging materials raise serious environmental concerns, due both to the materials themselves and to the organic residues that adhere to them. Specialists and researchers are therefore concerned with eliminating non-biodegradable conventional materials and unnecessary plastic, replacing them with biodegradable and edible materials and supporting the common effort to protect the environment. Even though environmental and health concerns will lead more consumers to switch to a plant-based diet, most people will continue to include meat in their diet. The paper presents the possibility of replacing the polyethylene packaging on the surface of trays for meat preparations with biodegradable packaging obtained from biopolymers. During the storage of meat products, deterioration may occur through lipid oxidation and microbial spoilage, as well as through modification of the organoleptic characteristics. For this reason, different compositions of polymer mixtures and film-forming conditions must be studied to choose the packaging material that best achieves food safety. The compositions proposed for packaging are obtained from alginate, agar and starch, with glycerol as plasticizer. The tensile strength, elasticity, modulus of elasticity, thickness, density, microscopic appearance, roughness, opacity, humidity, water activity, and the amount and rate of water transfer through these packaging materials were analyzed.
A total of 28 samples with various compositions were analyzed, and the results showed that the sample with the highest values of hardness, density, and opacity, as well as the smallest water vapor permeability, 1.2903E-4 ± 4.79E-6, has a component ratio of alginate:agar:glycerol of 3:1.25:0.75. The water activity of the analyzed films varied between 0.2886 and 0.3428 (aw < 0.6), demonstrating that all the compositions ensure preservation of the products by preventing microbial growth. All the determined parameters allow an appreciation of the quality of the packaging films in terms of mechanical resistance, protection against the influence of light, and the transfer of water through the packaging. Acknowledgments: This work was supported by a grant of the Ministry of Research, Innovation, and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-3863, within PNCDI III.
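The screening logic described, keeping formulations with water activity below 0.6 and then choosing the lowest water vapor permeability, can be sketched as follows. Only the winning ratio and its reported values come from the abstract; the other rows are hypothetical placeholders.

```python
# Hypothetical screening of film formulations. Each record holds the
# alginate:agar:glycerol ratio, water vapor permeability (wvp) and
# water activity (aw). Only the first row's figures are from the study.
samples = [
    {"ratio": "3:1.25:0.75", "wvp": 1.2903e-4, "aw": 0.31},
    {"ratio": "2:2:1",       "wvp": 2.1e-4,    "aw": 0.33},  # hypothetical
    {"ratio": "1:3:1",       "wvp": 1.8e-4,    "aw": 0.29},  # hypothetical
]

# Keep only microbiologically stable films (aw < 0.6 inhibits growth),
# then pick the best moisture barrier (lowest permeability).
stable = [s for s in samples if s["aw"] < 0.6]
best = min(stable, key=lambda s: s["wvp"])
print(best["ratio"])
```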

Keywords: meat products, alginate, agar, starch, glycerol

Procedia PDF Downloads 152
1181 Unveiling Adorno’s Concern for Revolutionary Praxis and Its Enduring Significance: A Philosophical Analysis of His Writings on Sociology and Philosophy

Authors: Marie-Josee Lavallee

Abstract:

Adorno’s reputation as an abstract and pessimistic thinker, who indulged in a critique of capitalist society and culture without bothering to open prospects for change and who had no interest in political activism, has recently begun to be questioned. This paper, which has a twofold objective, pushes revisionist readings a step further by putting forward the thesis that revolutionary praxis was an enduring concern for Adorno, surfacing throughout his entire work. On the other hand, it holds that his understanding of the relationship between theory and praxis, explained here by reference to Ernst Bloch’s distinction between the warm and cold currents of Marxism, can help interpret the paralysis of revolutionary practice in our own time in a new light. Philosophy and its tasks were an enduring topic of Adorno’s work from the 1930s to Negative Dialektik. The writings in which he develops these ideas are among his most obscure and abstract, so that their strong ties to the political have remained largely overlooked. Adorno’s undertaking of criticizing and ‘redeeming’ philosophy and metaphysics is inseparable from a concern for retrieving the capacity to act in the world and to change it. Philosophical problems are immanent to sociological problems, and vice versa, he underlines in his Metaphysik. Begriff und Probleme. The issue of truth cannot be severed from the contingent context of a given idea. As a critical undertaking extracting its contents from reality, which is what philosophy should be from Adorno’s perspective, philosophy has the potential to reveal fully the reification of the individual and of consciousness resulting from capitalist economic and cultural domination, thus opening the way to resistance and revolutionary change. While this project, in keeping with his usual method, is sketched mainly in negative terms, it also exhibits positive contours that depict a socialist society.
Only in the latter could human suffering end and mutilated individuals experience reconciliation in an authentic way. That Adorno’s continuous plea for philosophy’s self-critique and renewal hides an enduring concern for revolutionary praxis emerges clearly from a careful philosophical analysis of his writings on philosophy and a selection of his sociological work, coupled with references to his correspondence. This study points to the necessity of a serious re-evaluation of Adorno’s relationship to the political, one that will bear on the interpretation of his whole oeuvre. In the second place, Adorno’s dialectical conception of theory and praxis is enlightening for our own time, since it suggests that we are experiencing a phase of creative latency rather than an insurmountable impasse.

Keywords: Frankfurt school, philosophy and revolution, revolutionary praxis, Theodor W. Adorno

Procedia PDF Downloads 107
1180 Promoting 'One Health' Surveillance and Response Approach Implementation Capabilities against Emerging Threats and Epidemics Crisis Impact in African Countries

Authors: Ernest Tambo, Ghislaine Madjou, Jeanne Y. Ngogang, Shenglan Tang, Zhou XiaoNong

Abstract:

Implementing a national to community-based 'One Health' surveillance approach to mitigate the human, animal and environmental consequences of emerging threats offers great opportunities and added value for sustainable development and wellbeing. Global partnerships, policy commitment and financial investment in the 'One Health' surveillance approach are much needed to address the evolving threats and epidemic crises in African countries. The paper provides insights into how China-Africa health development cooperation can promote the 'One Health' surveillance approach in response advocacy and mitigation. China-Africa health development initiatives provide new prospects for guiding and advancing appropriate, evidence-based advocacy and mitigation management approaches and strategies toward attaining Universal Health Coverage (UHC) and the Sustainable Development Goals (SDGs). Early, continuous, high-quality and timely surveillance data collection and coordinated information sharing practices in malaria and other diseases are demonstrated in Comoros, Zanzibar, Ghana and Cameroon. Improvements in access to contextual sources and networks of data sharing platforms are needed to guide evidence-based, tailored detection of, and response to, unusual hazardous events. Moreover, understanding threat and disease trends and frontline or point-of-care response delivery is crucial to promote the integrated, sustainable and targeted local and national 'One Health' surveillance and response implementation that is needed. Importantly, operational guidelines are vital for increasing coherent financing and national workforce capacity development mechanisms, and for strengthening the participatory partnerships, collaboration and monitoring strategies required for an effective global health agenda in Africa.
At the same enhancing surveillance data information streams reporting and dissemination usefulness in informing policies decisions, health systems programming and financial mobilization and prioritized allocation pre, during and post threats and epidemics crises programs strengths and weaknesses. Thus, capitalizing on “One Health” surveillance and response approach advocacy and mitigation implementation is timely in consolidating Africa Union 2063 agenda and Africa renaissance capabilities and expectations.

Keywords: Africa, one health approach, surveillance, response

Procedia PDF Downloads 405
1179 Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program

Authors: Kari Björn, Mikael Soini

Abstract:

Higher education system reforms, especially the 2014 reform of the Finnish system of Universities of Applied Sciences, are discussed. The new steering model is based on major legislative changes, output-oriented funding and open information. The governmental steering reform, especially the financial model and the resulting institutional-level responses such as curriculum reforms, is discussed, focusing especially on engineering programs. The paper is motivated by a management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be directly applied. Metropolia University of Applied Sciences (MUAS) is briefly introduced as the case institution, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and used across Majors on two campuses. The new externally introduced performance criteria are applied internally to ICT Majors using ex-ante and ex-post data from the program merger. A comparative performance of the Majors after completion of the joint first year is established, focusing on previously omitted Majors for completeness of analysis. Some new research questions resulting from the transfer of Majors between campuses and quota setting are discussed. The practical orientation identifies best practices to share, as well as targets needing the most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible once identified. The analysis is quantitative, and the nature of the corrective actions is not discussed. Causal relationships and factor analysis are omitted because the campuses, their staff and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.
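
The kind of parallel (across Majors) and longitudinal (ex-ante vs ex-post) comparison described above can be sketched as a simple aggregation over per-student records. The records, field names and credit values below are invented for illustration only; the study's actual indicators are not specified here.

```python
from collections import defaultdict

# Hypothetical per-student records: Major, campus, merger phase, credits earned
records = [
    {"major": "Smart Systems", "campus": "A", "phase": "ex-ante", "credits": 52},
    {"major": "Smart Systems", "campus": "A", "phase": "ex-post", "credits": 58},
    {"major": "Smart Systems", "campus": "B", "phase": "ex-post", "credits": 55},
    {"major": "Software Engineering", "campus": "A", "phase": "ex-ante", "credits": 60},
    {"major": "Software Engineering", "campus": "A", "phase": "ex-post", "credits": 57},
]

def mean_credits_by(records, *keys):
    """Average a learning-outcome indicator (credits) grouped by the given
    keys, enabling parallel (Major/campus) and longitudinal (phase)
    comparisons."""
    sums = defaultdict(lambda: [0, 0])
    for r in records:
        group = tuple(r[k] for k in keys)
        sums[group][0] += r["credits"]
        sums[group][1] += 1
    return {g: s / n for g, (s, n) in sums.items()}

print(mean_credits_by(records, "major", "phase")[("Smart Systems", "ex-post")])  # 56.5
```

Grouping keys can be swapped (e.g. by campus) without changing the aggregation logic, which is what makes the comparison framework reusable across program mergers.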

Keywords: engineering education, integrated curriculum, learning outcomes, performance measurement

Procedia PDF Downloads 221
1178 Intellectual Property Rights (IPR) in the Relations among Nations: Towards a Renewed Hegemony or Not

Authors: Raju K. Thadikkaran

Abstract:

Introduction: IPR have come to the centre stage of development discourse today for a variety of reasons, ranging from arbitrariness in enforcement, overlap and mismatch with various international agreements and conventions, and divergence in definition, nature, content and duration, to severe adverse consequences for technologically weak developing countries. In turn, IPR have acquired prominence in foreign policy making as well as in the relations among nations. Quite naturally, there is ample scope for an examination of the correlation between technology, IPR and international relations in the contemporary world. Nature and Scope: A cursory examination of the realm of IPR and its protection reveals the acute divergence that exists in perspectives on all matters related to the very definition, nature, content, scope and duration. The proponents of stronger protection, mostly technologically advanced countries, insist on a stringent IP regime, whereas technologically weak developing countries advocate flexibilities. From the perspective of developing countries like India, one of the most crucial concerns is the patenting of life forms and the protection of traditional knowledge (TK) and biodiversity (BD). There have been several instances of bio-piracy and bio-prospecting of resources related to BD and TK from the bio-rich Global South. It is widely argued that many provisions in the TRIPS Agreement are capable of offsetting the welcome provisions in the CBD, such as Access and Benefit Sharing and Prior Informed Consent. The point being argued is how the mismatch between the provisions of the TRIPS Agreement and the CBD could be addressed in a healthy manner, so that the essential minimum legitimate interests of all stakeholders are secured, thereby introducing a new direction to international relations. The findings of this study reveal that the challenges raised by the TRIPS regime outweigh the opportunities. The mismatch in provisions has generated crucial issues such as bio-piracy and bio-prospecting. However, there is ample scope for managing and protecting IP through institutional innovation and legislative, executive and administrative initiative at the global, national and regional levels. The Indian experience is quite reflective of this, and efforts are being made through the new national IPR policy. This paper, employing the historical-analytical method, has three sections. The first section traces the correlation between technology, IPR and international relations. The second section reviews the issues and potential concerns in the protection and management of IP related to BD and TK in developing countries in the wake of the TRIPS Agreement and the CBD. The final section analyzes the Indian experience in this regard, and the experience of bio-rich Kerala in particular.

Keywords: IPR, technology and international relations, bio-diversity, traditional knowledge

Procedia PDF Downloads 360
1177 Stimulus-Response and the Innateness Hypothesis: Childhood Language Acquisition of “Genie”

Authors: Caroline Kim

Abstract:

Scholars have long disputed the relationship between the origins of language and human behavior. Historically, the behaviorist psychologist B. F. Skinner argued that language is one instance of the general stimulus-response phenomenon that characterizes the essence of human behavior. A more recent approach argues, by contrast, that language is an innate cognitive faculty and does not arise from behavior, which might develop and reinforce linguistic facility but is not its source. Pinker, among others, proposes that linguistic defects arise from damage to the brain, both congenital and acquired in life. Much of his argument is based on case studies in which damage to Broca’s and Wernicke’s areas of the brain results in loss of the ability to produce coherent grammatical expressions when speaking or writing; though affected speakers often utter quite fluent streams of sentences, the words articulated lack discernible semantic content. Pinker concludes on this basis that language is an innate component of specific, classically language-correlated regions of the human brain. Taking a notorious 1970s case of linguistic maladaptation, this paper queries the dominant materialist paradigm of language-correlated regions. Susan “Genie” Wiley was physically isolated from language interaction in her home and beaten by her father when she attempted to make any sort of sound. Though without any measurable resulting damage to the brain, Wiley was never able to develop the level of linguistic facility normally achieved in adulthood. Having received negative reinforcement of language acquisition from her father and lacking the usual language acquisition period, Wiley was able to develop language only to a quite limited level in later life. From a contemporary behaviorist perspective, this case confirms the possibility of language deficiency without brain pathology. Wiley’s potentially language-determining areas of the brain were intact, and she was exposed to language later in her life, but she was unable to achieve a normal level of communication skills, deterring socialization. This phenomenon, and others like it in the limited case literature on linguistic maladaptation, poses serious clinical, scientific, and indeed philosophical difficulties for both of the major competing theories of language acquisition: innateness and linguistic stimulus-response. The implications of such cases for future research in language acquisition are explored, with a particular emphasis on the interaction of innate capacity and stimulus-based development in early childhood.

Keywords: behaviorism, innateness hypothesis, language, Susan "Genie" Wiley

Procedia PDF Downloads 280
1176 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras’ positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated by the commercial software were comparable with those obtained through laboratory procedures. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, within less than three seconds of arc. DEM differencing was performed between the generated DEMs, and vertical shifts of only a few centimeters were found.
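
The DEM differencing step can be illustrated with a short sketch. The two grids and the uniform 0.04 m shift below are invented for illustration; they are not the study's data, and real DEMs would need co-registration and no-data masking first.

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Difference two co-registered DEM grids and summarize the vertical shift.

    dem_a, dem_b: 2-D arrays of elevations on the same grid;
    NaN cells (no-data) are ignored.
    """
    diff = dem_a - dem_b
    valid = diff[~np.isnan(diff)]
    return {
        "mean_shift_m": float(np.mean(valid)),
        "rmse_m": float(np.sqrt(np.mean(valid ** 2))),
        "max_abs_m": float(np.max(np.abs(valid))),
    }

# Two synthetic 3x3 DEMs offset by a uniform 0.04 m vertical shift
a = np.array([[10.0, 10.1, 10.2],
              [10.3, 10.4, 10.5],
              [10.6, 10.7, 10.8]])
b = a - 0.04
stats = dem_difference_stats(a, b)
print(round(stats["mean_shift_m"], 3))  # 0.04
```

On real data, the mean shift flags a systematic datum offset between the SfM and photogrammetric surfaces, while the RMSE captures scatter about it.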

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 272
1175 Geographic Legacies for Modern Day Disease Research: Autism Spectrum Disorder as a Case-Control Study

Authors: Rebecca Richards Steed, James Van Derslice, Ken Smith, Richard Medina, Amanda Bakian

Abstract:

Elucidating gene-environment interactions for heritable disease outcomes is an emerging area of disease research, with genetic studies informing hypotheses for the environment and gene interactions underlying some of the most confounding diseases of our time, like autism spectrum disorder (ASD). Geography has thus far played a key role in identifying environmental factors contributing to disease, but its use can be broadened to include genetic and environmental factors that have a synergistic effect on disease. Through the use of family pedigrees and disease outcomes combined with life-course residential histories, space-time clustering of generations at critical developmental windows can provide further understanding of (1) environmental factors that contribute to disease patterns in families, (2) susceptible critical windows of development most impacted by the environment, and (3) the windows most likely to lead to an ASD diagnosis. This paper introduces a retrospective case-control study that utilizes pedigree data, health data, and residential life-course location points to find space-time clustering of ancestors with a grandchild/child with a clinical diagnosis of ASD. Finding space-time clusters of ancestors at critical developmental windows serves as a proxy for shared environmental exposures. The authors refer to such geographic life-course exposures as geographic legacies. Identifying space-time clusters of ancestors creates a bridge for researching exposures of past generations that may impact modern-day progeny health. Results from the space-time cluster analysis show multiple clusters for the maternal and paternal pedigrees. The paternal grandparent pedigree resulted in the most space-time clustering for the birth and childhood developmental windows. No statistically significant clustering was found for the adolescent years. These results will be studied further to identify the specific shared space-time environmental exposures. In conclusion, this study has found significant space-time clusters of parents and grandparents for both the maternal and paternal lineages. These results will be used to identify which environmental exposures have been shared by family members at critical developmental windows, and additional analysis will be applied.
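
The core idea of space-time clustering can be illustrated with the Knox statistic, a standard measure of space-time interaction that counts event pairs close in both space and time. This is a generic sketch; the abstract does not state which statistic the study used, and the coordinates, years and thresholds below are hypothetical.

```python
import math
from itertools import combinations

def knox_count(events, space_thresh, time_thresh):
    """Count event pairs that are close in BOTH space and time (Knox statistic).

    events: list of (x, y, t) tuples — e.g. an ancestor's residence (x, y)
    at the midpoint year t of a developmental window. Significance would
    normally be assessed by permuting the time labels.
    """
    close = 0
    for (x1, y1, t1), (x2, y2, t2) in combinations(events, 2):
        if math.hypot(x1 - x2, y1 - y2) <= space_thresh and abs(t1 - t2) <= time_thresh:
            close += 1
    return close

# Hypothetical ancestor residences: two clustered in space and time, one distant
events = [(0.0, 0.0, 1950), (0.5, 0.2, 1951), (40.0, 35.0, 1980)]
print(knox_count(events, space_thresh=5.0, time_thresh=5))  # 1
```

A count well above what permutations of the time labels produce would be the signal that ancestors shared a place at the same developmental window, i.e. a candidate shared exposure.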

Keywords: family pedigree, environmental exposure, geographic legacy, medical geography, transgenerational inheritance

Procedia PDF Downloads 103
1174 Analysis of Ancient and Present Lightning Protection Systems of Large Heritage Stupas in Sri Lanka

Authors: J.R.S.S. Kumara, M.A.R.M. Fernando, S.Venkatesh, D.K. Jayaratne

Abstract:

Protection of heritage monuments against lightning has become extremely important as far as their historical value is concerned. When such structures are large and tall, the risk of lightning initiated from both cloud and ground can be high. This paper presents a lightning risk analysis of three giant stupas of the Anuradhapura era (fourth century BC onwards) in Sri Lanka. The three stupas are Jethawanaramaya (269-296 AD), Abayagiriya (88-76 BC) and Ruwanweliseya (161-137 BC), the third, fifth and seventh largest ancient structures in the world. These stupas are solid brick structures consisting of a base, a near-hemispherical dome and a conical spire on the top. The hypothesis for the original lightning protection technique is that the ancient stupas were constructed with a dielectric crystal on top, connected to the ground through a conducting material. At present, however, all three stupas are protected with Franklin-rod type air termination systems located on top of the spire. First, a risk analysis was carried out according to IEC 62305 by considering the isokeraunic level of the area and the height of the stupas. Then the standard protective angle method and the rolling sphere method were used to locate the possible touching points on the surface of the stupas. The study was extended to estimate the critical current which could strike the unprotected areas of the stupas. The equations proposed by Uman (2001) and Cooray (2007) were used to find the striking distances. A modified version of the rolling sphere method was also applied to see the effects of upward leaders. All these studies were carried out for two scenarios: with the original (i.e. ancient) lightning protection system and with the present (i.e. new) air termination system. The field distribution on the surface of the stupa in the presence of a downward leader was obtained using the finite-element based commercial software COMSOL Multiphysics for further investigation of lightning risks. The obtained results were analyzed and compared with each other to evaluate the performance of the ancient and new lightning protection methods and to identify suitable methods for designing lightning protection systems for stupas. According to IEC standards, all three stupas, with either the new or the ancient lightning protection system, have Level IV protection as per the protection angle method. However, according to the rolling sphere method applied with Uman's equation, the protection level is III. The same method applied with Cooray's equation always shows a higher risk than with Uman's equation. It was found that there is a risk of lightning strikes on the dome and the square chamber of the stupa, and the corresponding critical current values differed depending on the equations used in the rolling sphere method and the modified rolling sphere method.
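
The link between rolling-sphere radius and critical current can be sketched with the commonly cited electro-geometric relation r = 10·I^0.65 (r in metres, I in kA). This is the textbook form used in IEC 62305, not necessarily the exact equations of Uman or Cooray applied in the study, and the constants vary between authors.

```python
def striking_distance_m(peak_current_ka, a=10.0, b=0.65):
    """Striking distance r = a * I**b (r in m, I in kA), the common
    electro-geometric model; a and b differ between authors."""
    return a * peak_current_ka ** b

def critical_current_ka(radius_m, a=10.0, b=0.65):
    """Invert r = a * I**b: the smallest peak current that a rolling
    sphere of the given radius intercepts — strokes below it can slip
    past the air termination and strike an unprotected point."""
    return (radius_m / a) ** (1.0 / b)

# IEC 62305 rolling-sphere radii (m) for protection levels I-IV
for level, radius in {"I": 20, "II": 30, "III": 45, "IV": 60}.items():
    print(level, round(critical_current_ka(radius), 1))
# prints roughly 2.9, 5.4, 10.1 and 15.7 kA for levels I-IV
```

These recovered currents match the minimum peak currents IEC 62305 associates with each protection level, which is why a point reachable by a 60 m sphere but not a 20 m one is only at risk from weaker strokes.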

Keywords: Stupa, heritage, lightning protection, rolling sphere method, protection level

Procedia PDF Downloads 227
1173 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body

Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker

Abstract:

This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by a Polish company, Aviation Artur Trendak. This gyroplane has been studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent procedure in any kind of designing. When scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, one should pay attention to how the values of the coefficients behave if certain criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number should be focused on. This is the ratio of inertial to viscous forces. Since the flow speed multiplied by the characteristic dimension appears in its numerator (with a constant kinematic viscosity coefficient), the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients specified in this research depend on the real forces that act on an object, its characteristic dimension, the medium speed and variations in its density. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed with the T-1 low-speed wind tunnel (the diameter of its measurement volume is 1.5 m) and a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a low-speed, continuous-operation wind tunnel with an open test section. The research covered a number of selected speeds of undisturbed flow, i.e. V = 20, 30 and 40 m/s, corresponding to Reynolds numbers (referred to 1 m) of Re = 1.31·10⁶, 1.96·10⁶ and 2.62·10⁶, for angles of attack over the range -15° ≤ α ≤ 20°. Our research resulted in basic aerodynamic characteristics and in observations of the impact of the undisturbed flow speed on the correlation of the aerodynamic coefficients as a function of the angle of attack of the gyroplane body. If the speed of the undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly impacted. Between 20 m/s and 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if it is included. Within the same speed range, the pitching moment coefficient, Cmy, changes by -21.1% up to 7.3% if the angles of attack of -15° and -10° are excluded, or by -142.8% up to 618.4% if they are included. These discrepancies in the coefficients of aerodynamic forces need to be considered while designing the aircraft. For example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces while scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.
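
The Reynolds-number matching argument can be made concrete in a few lines. The kinematic viscosity of air is taken here as roughly 1.5e-5 m²/s, an assumption that reproduces the Re values quoted above only approximately.

```python
def reynolds_number(speed_ms, length_m, nu=1.5e-5):
    """Re = V * L / nu, the ratio of inertial to viscous forces.
    nu is the kinematic viscosity (m^2/s), here roughly that of air."""
    return speed_ms * length_m / nu

# Matching Re for a 1:8 scale model requires 8x the flow speed,
# because V appears in the numerator and L shrinks by the scale factor.
scale = 8
full_scale_re = reynolds_number(20.0, 1.0)
model_re = reynolds_number(20.0 * scale, 1.0 / scale)
print(full_scale_re == model_re)  # True
```

In practice a 160 m/s test speed is rarely feasible in a low-speed tunnel, which is exactly why the study quantifies how the coefficients drift with the achievable speeds of 20 to 40 m/s.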

Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel

Procedia PDF Downloads 377
1172 Narcissism and Kohut's Self-Psychology: Self Practices in Service of Self-Transcendence

Authors: Noelene Rose

Abstract:

The DSM has been plagued with conceptual issues since its inception, not least discriminant validity and comorbidity issues. An attempt to remain a-theoretical in the divide between the psychodynamicists and the behaviourists contributed to much of this, in particular in relation to the personality disorders. With the DSM-5, although the criteria have remained unchanged, major conceptual and structural directions have been flagged and proposed in Section III. The biggest changes concern the personality disorders. While Narcissistic Personality Disorder (NPD) was initially tagged for removal, Section III instead proposes a move away from a categorical approach towards a more dimensional approach, with a measure of global functioning of personality. This global measure is an assessment of impairment of self-other relations: a measure of trait narcissism. Just as mainstream psychology has struggled in its diagnosis of narcissism, so too has it struggled in its treatment. Kohut's self-psychology represents the most significant inroad in theory and treatment for the narcissistic disorders. Kohut moved away from a categorical system, towards disorders of the self. According to this theory, disorders of the self are the result of childhood trauma (impaired attunement) resulting in a developmental arrest. Self-psychological, psychodynamic treatment of narcissism, however, is expensive in time and money, and outside the awareness or access of most people. There is more than a suggestion that narcissism is on the increase, created in trauma and worsened by a fearful world climate. A dimensional model of narcissism, from mild to severe, requires cut-off points for diagnosis. But where do we draw the line? Mainstream psychology is inclined to set it high, at the point where there is some degree of impairment in functioning in daily life. Transpersonal psychology is inclined to set it low, with the concept that we all have some degree of narcissism and that it is the point and the path of our life journey to transcend our focus on our selves. Mainstream psychology stops its focus on trait narcissism at a healthy level of self-esteem, but it is at this point that transpersonal psychology can complement the discussion. From a transpersonal point of view, failure to begin the process of self-transcendence will also create emotional symptoms around meaning or purpose, often later in our lives, and is also conceived of as a developmental arrest. The maps for this transcendence are hidden in plain sight: in the chakras of kundalini yoga, in the sacraments of the Catholic Church, in the Kabbalah tree of life of Judaism, and in Maslow's hierarchy of needs, to name a few. This paper outlines proposed research exploring the use of daily practices that can be incorporated into the therapy room; practices that utilise meditation, visualisation and imagination, that are informed by spiritual technology and guided by the psychodynamic theory of self-psychology.

Keywords: narcissism, self-psychology, self-practice, self-transcendence

Procedia PDF Downloads 245
1171 A Static and Dynamic Slope Stability Analysis of Sonapur

Authors: Rupam Saikia, Ashim Kanti Dey

Abstract:

Sonapur is an intensely hilly region on the border of Assam and Meghalaya in North-East India and is very near a seismic fault named Dauki, which makes the region seismically active. Besides, two earthquakes of magnitude 6.7 and 6.9 recently struck North-East India, in January and April 2016. The slope concerned in this study is adjacent to NH 44, which has for a long time been the sole important connecting link to the states of Manipur and Mizoram along with some parts of Assam, and so has been a cause of considerable loss of life and property over past decades, as there have been several recorded incidents of landslides, road blocks, etc., mostly during the rainy season. Based on this issue, this paper reports a static and dynamic slope stability analysis of Sonapur carried out in MIDAS GTS NX. As the slope is highly unreachable due to the terrain and thick vegetation, in-situ testing was not feasible within the current scope, so disturbed soil samples were collected from the site for the determination of strength parameters. The strength parameters were determined for varying relative density, with further variation in water content. The slopes were analyzed considering plane strain conditions for three slope heights of 5 m, 10 m and 20 m, which were then further categorized based on slope angles of 30°, 40°, 50°, 60° and 70°, covering the possible extent of steepness. Initially, static analysis under the dry state was performed; then, considering the worst case that can develop during the rainy season, the slopes were analyzed for the fully saturated condition along with partial degrees of saturation as the waterfront rises. Furthermore, dynamic analysis was performed, considering the El Centro earthquake (magnitude 6.7, peak ground acceleration of 0.3569g at 2.14 s), for the slopes found to be safe during static analysis under both dry and fully saturated conditions. Some of the conclusions were: slopes with inclinations of 40° and above were found to be highly vulnerable for slope heights of 10 m and above, even under dry static conditions. The maximum horizontal displacement showed an exponential increase with the increase in inclination from 30° to 70°. The vulnerability of the slopes was seen to increase further during the rainy season, as even slopes with a minimal steepness of 30° and a height of 20 m were seen to be on the verge of failure. Also, during dynamic analysis, slopes that were safe under static analysis were found to be highly vulnerable. Lastly, as part of the study, a comparative study of the Strength Reduction Method (SRM) versus the Limit Equilibrium Method (LEM) was carried out, and some of the advantages and disadvantages of each were identified.
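
The qualitative trends above (steeper slopes and saturation both lowering stability) can be illustrated with the textbook infinite-slope factor of safety. This is a deliberate simplification for intuition only, not the SRM/LEM finite-element analyses of the study, and all soil parameters below are hypothetical.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, depth, beta_deg, ru=0.0):
    """Factor of safety of an infinite slope with a planar slip surface.

    c: cohesion (kPa), phi_deg: friction angle (deg),
    gamma: unit weight (kN/m^3), depth: slip surface depth (m),
    beta_deg: slope angle (deg), ru: pore-pressure ratio
    (0 = dry, ~0.5 = near-saturated).
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma = gamma * depth * math.cos(beta) ** 2            # normal stress on slip plane
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + sigma * (1 - ru) * math.tan(phi)) / tau

# Hypothetical soil: rising pore pressure pushes FoS below 1 (failure)
print(round(infinite_slope_fos(10, 30, 18, 3, 30), 2))          # ≈ 1.43 (dry)
print(round(infinite_slope_fos(10, 30, 18, 3, 30, ru=0.5), 2))  # ≈ 0.93 (wet)
```

The same pattern drives the study's conclusion that slopes safe in the dry static case can fail once saturated, though SRM captures this by reducing c and tan(phi) until the numerical model fails rather than by a closed-form ratio.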

Keywords: dynamic analysis, factor of safety, slope stability, strength reduction method

Procedia PDF Downloads 248
1170 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams

Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim

Abstract:

As the number of fire incidents has increased, fire has come to significantly damage the economy and human lives. Especially when high strength reinforced concrete is exposed to high temperature due to a fire, deterioration occurs, such as loss of strength and elastic modulus, cracking, and spalling of the concrete. Therefore, it is important to understand the risk to structural safety in building structures by studying the structural behavior and rehabilitation of fire-damaged high strength concrete structures. This paper aims at investigating the rehabilitation effect on fire-damaged high strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high strength concrete are exposed to high temperatures according to the ISO 834 standard time-temperature curve. After heating, the fire-damaged reinforced concrete (RC) beams, having different cover thicknesses and fire exposure time periods, are rehabilitated by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Four-point loading tests show that the maximum loads of the rehabilitated RC beams are 1.8~20.9% higher than that of the non-fire-damaged RC beam. On the other hand, the ductility ratios of the rehabilitated RC beams are lower than that of the non-fire-damaged RC beam. In addition, structural analyses are performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behavior of rehabilitated RC beams. For the rehabilitated RC beam models, integrated temperature-structural analyses are performed in advance to obtain the geometries of the fire-damaged RC beams. After the spalled and damaged parts are removed, the rehabilitated part is added to the damaged model with the material properties of polymeric mortar. Three-dimensional continuum brick elements are used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments are applied to the rehabilitated beam models, and nonlinear geometrical analyses are performed. The structural analytical results show good rehabilitation effects when the results predicted from the rehabilitated models are compared to the structural behavior of the non-damaged RC beams. In this study, fire-damaged high strength concrete beams are rehabilitated using polymeric mortar. From the four-point loading tests, it is found that such rehabilitation is able to make the structural performance of fire-damaged beams similar to that of non-damaged RC beams. The predictions from the finite element models show good agreement with the experimental results, and the modeling approaches can be used to investigate the applicability of various rehabilitation methods in further study.

Keywords: fire, high strength concrete, rehabilitation, reinforced concrete beam

Procedia PDF Downloads 433
1169 Retrieving Iconometric Proportions of South Indian Sculptures Based on Statistical Analysis

Authors: M. Bagavandas

Abstract:

Introduction: South Indian stone sculptures are known for their elegance and history. They are available in large numbers in different monuments situated different parts of South India. These art pieces have been studied using iconography details, but this pioneering study introduces a novel method known as iconometry which is a quantitative study that deals with measurements of different parts of icons to find answers for important unanswered questions. The main aim of this paper is to compare iconometric measurements of the sculptures with canonical proportion to determine whether the sculptors of the past had followed any of the canonical proportions prescribed in the ancient text. If not, this study recovers the proportions used for carving sculptures which is not available to us now. Also, it will be interesting to see how these sculptural proportions of different monuments belonging to different dynasties differ from one another in terms these proportions. Methods and Materials: As Indian sculptures are depicted in different postures, one way of making measurements independent of size, is to decode on a suitable measurement and convert the other measurements as proportions with respect to the chosen measurement. Since in all canonical texts of Indian art, all different measurements are given in terms of face length, it is chosen as the required measurement for standardizing the measurements. In order to compare these facial measurements with measurements prescribed in Indian canons of Iconography, the ten facial measurements like face length, morphological face length, nose length, nose-to-chin length, eye length, lip length, face breadth, nose breadth, eye breadth and lip breadth were standardized using the face length and the number of measurements reduced to nine. Each measurement was divided by the corresponding face length and multiplied by twelve and given in angula unit used in the canonical texts. 
The reason for multiplying by twelve is that the face length is given as twelve angulas in the canonical texts for all figures. Clustering techniques were used to determine whether the sculptors of the past followed any of the proportions prescribed in the canonical texts and to compare the proportions of sculptures across monuments. About one hundred and twenty-seven stone sculptures from four monuments belonging to the Pallava, Chola, Pandya and Vijayanagar dynasties were taken up for this study. These art pieces belong to a period ranging from the eighth to the sixteenth century A.D., and all of them adorn monuments situated in different parts of Tamil Nadu State, South India. Anthropometric instruments were used for taking the measurements, and the author himself measured all the sample pieces of this study. Result: Statistical analysis of sculptures from different centers of art and different dynasties shows considerable differences in facial proportions, and many of these proportions differ widely from the canonical proportions. The retrieved facial proportions indicate that the definition of beauty changed from period to period and from region to region.
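The standardization step described above can be sketched in a few lines. The measurement values below are invented for illustration, and Ward hierarchical clustering stands in for whatever clustering technique the study actually employed.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical facial measurements in cm for three sculptures.
# Columns: face length, nose length, eye length, lip length.
data = np.array([
    [24.0, 8.0, 4.0, 6.0],
    [22.0, 7.5, 3.8, 5.6],
    [30.0, 9.0, 5.2, 7.4],
])

# Standardize: divide every measurement by that sculpture's face length
# (column 0) and multiply by 12, so face length itself becomes 12 angulas.
angulas = data / data[:, [0]] * 12.0

# Cluster the remaining proportions (the standardized face length is now
# constant at 12 angulas and carries no information, so it is dropped).
Z = linkage(angulas[:, 1:], method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Sculptures assigned to the same cluster share a common set of proportions, which can then be compared against the canonical 12-angula prescriptions.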

Keywords: iconometry, proportions, sculptures, statistics

Procedia PDF Downloads 144
1168 Peer Corrective Feedback on Written Errors in Computer-Mediated Communication

Authors: S. H. J. Liu

Abstract:

This paper aims to explore the role of peer Corrective Feedback (CF) in improving written productions by English-as-a-foreign-language (EFL) learners who work together via Wikispaces. It attempted to determine the effect of peer CF on form accuracy in English, such as grammar and lexis. Thirty-four EFL learners at the tertiary level were randomly assigned to the experimental (with peer feedback) or the control (without peer feedback) group; each group was subdivided into small groups of two or three, resulting in six and seven small groups in the experimental and control groups, respectively. In the experimental group, each learner played the role of an assessor (providing feedback to others) as well as an assessee (receiving feedback from others). Each participant was asked to compose his/her written work and revise it based on the feedback received. In the control group, on the other hand, learners neither provided nor received feedback but composed and revised their written work on their own. Data collected from learners’ compositions and post-task interviews were analyzed and reported in this study. Following the completion of three writing tasks, 10 participants were selected and interviewed individually regarding their perception of collaborative learning in the Computer-Mediated Communication (CMC) environment. Language aspects analyzed included lexis (e.g., appropriate use of words), verb tenses (e.g., present and past simple), prepositions (e.g., in, on, and between), nouns, and articles (e.g., a/an). Feedback types consisted of corrective, affective, suggestive, and didactic. Frequencies of feedback types and the accuracy of the language aspects were calculated. The results first suggested that accurate items were found more often in the experimental group than in the control group, indicating that those who worked collaboratively outperformed those who worked individually on the accuracy of linguistic aspects.
Furthermore, corrective feedback (i.e., corrections directly related to linguistic errors) was found to be the most frequently employed type, whereas affective and didactic feedback were the least used by the experimental group. The results further indicated that most participants perceived peer CF as helpful in improving language accuracy, and they demonstrated a favorable attitude toward working with others in the CMC environment. Moreover, some participants stated that when they provided feedback to their peers, they tended to pay attention to linguistic errors in their peers’ work but to overlook their own errors (e.g., past simple tense) when writing. Finally, L2 or FL teachers and practitioners are encouraged to employ CMC technologies to train their students to give each other feedback in writing, both to improve language accuracy and to motivate them to attend to the language system.

Keywords: peer corrective feedback, computer-mediated communication (CMC), second or foreign language (L2 or FL) learning, Wikispaces

Procedia PDF Downloads 233
1167 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to address this problem. Simple geometry problems could easily be solved by traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into applications across many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular because they are easy to automate. However, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which is an essential aspect of RC structural performance. Non-linear plastic analyses, by contrast, give very reliable results; however, this type of analysis is computationally quite expensive, i.e. not well suited for solving daily engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially redistribute moments among the remaining non-yielded elements. The proposed technique obeys Nielsen’s yield criterion.
The outcome of this analysis provides a simple, accurate and fast tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can determine fracture patterns and ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact value of the collapse load.
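A minimal sketch of the per-element yield check underlying such a procedure is shown below. The form of Nielsen's criterion used here (separate sagging and hogging conditions for an orthotropically reinforced slab element) and all moment values are illustrative assumptions, not the paper's implementation.

```python
def violates_nielsen(mx, my, mxy, mpx, mpy, mnx, mny):
    """Return True if the moment state (mx, my, mxy) lies outside the
    yield surface of an orthotropically reinforced slab element.

    mpx, mpy: positive (sagging) yield moments per unit width.
    mnx, mny: negative (hogging) yield moments, given as positive values.
    """
    sagging = mxy ** 2 - (mpx - mx) * (mpy - my) > 0.0
    hogging = mxy ** 2 - (mnx + mx) * (mny + my) > 0.0
    return sagging or hogging

# A state inside the surface, one over-yielded in bending,
# and one over-yielded in twisting (units: kNm/m, invented):
safe = violates_nielsen(0.0, 0.0, 0.0, 10.0, 10.0, 10.0, 10.0)
bending = violates_nielsen(12.0, 0.0, 0.0, 10.0, 10.0, 10.0, 10.0)
twisting = violates_nielsen(0.0, 0.0, 11.0, 10.0, 10.0, 10.0, 10.0)
```

In a pseudo-lower-bound scheme, elements flagged by such a check would have their excess moments redistributed to the non-yielded elements.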

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 165
1166 Hand Motion Tracking as a Human Computer Interaction for People with Cerebral Palsy

Authors: Ana Teixeira, Joao Orvalho

Abstract:

This paper describes experiments using Scratch games, carried out by students of the Master in Human-Computer Interaction (HCI) of IPC Coimbra, to check the feasibility of employing the gestures of users with cerebral palsy (CP) as an alternative means of interacting with a computer. The main focus of this work is to study the usability of a web camera as a motion-tracking device for virtual human-computer interaction by individuals with CP. An approach to HCI is presented in which individuals with cerebral palsy react and interact with a Scratch game through the use of a webcam as an external interaction device. Motion-tracking interaction is an emerging technology that is becoming more useful, effective and affordable. However, it raises new questions from the HCI viewpoint, for example, which environments are most suitable for interaction by users with disabilities. In our case, we put emphasis on the accessibility and usability aspects of such interaction devices to meet the special needs of people with disabilities, and specifically people with CP. Although our work has just started, preliminary results show that, in general, computer vision interaction systems are very useful; in some cases, these systems are the only way some people can interact with a computer. The purpose of the experiments was to verify two hypotheses: 1) people with cerebral palsy can interact with a computer using their natural gestures; 2) Scratch games can be a research tool in experiments with disabled young people. A Scratch game with three levels was created to be played through the use of a webcam. This device permits the detection of certain key points of the user’s body, which allows the head, arms and especially the hands to be treated as the most important targets of recognition. Tests with five individuals of different ages and genders were carried out over three days in sessions of 30 minutes with each participant.
For a more extensive and reliable statistical analysis, the number of both participants and repetitions should be increased in further investigations. However, already at this stage of the research, it is possible to draw some conclusions. The first, and most important, is that simple Scratch games on the computer can be a research tool for investigating computer interaction performed by young persons with CP using intentional gestures. Measurements performed with the assistance of games are attractive for young disabled users. The second important conclusion is that they are able to play Scratch games using their gestures; therefore, the proposed interaction method is promising for them as a human-computer interface. In the future, we plan to develop multimodal interfaces that combine various computer vision devices with other input devices, improve the existing systems to better accommodate the special needs of individuals, and perform experiments on a larger number of participants.

Keywords: motion tracking, cerebral palsy, rehabilitation, HCI

Procedia PDF Downloads 222
1165 Gear Fault Diagnosis Based on Optimal Morlet Wavelet Filter and Autocorrelation Enhancement

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Condition monitoring is used to increase machinery availability and machinery performance, whilst reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient condition monitoring system provides early warning of faults by predicting them at an early stage. When a localized fault occurs in gears, the vibration signals always exhibit non-stationary behavior. The periodic impulsive feature of the vibration signal appears in the time domain, and the corresponding gear mesh frequency (GMF) emerges in the frequency domain. However, one limitation of frequency-domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur. Particularly at the early stage of gear failure, the GMF contains very little energy and is often overwhelmed by noise and higher-level macro-structural vibrations. An effective signal processing method is therefore necessary to remove such corrupting noise and interference. In this paper, a new hybrid method based on an optimal Morlet wavelet filter and autocorrelation enhancement is presented. First, to eliminate the frequencies associated with interferential vibrations, the vibration signal is filtered with a band-pass filter determined by a Morlet wavelet whose parameters are selected or optimized based on maximum kurtosis. Then, to further reduce the residual in-band noise and highlight the periodic impulsive feature, an autocorrelation enhancement algorithm is applied to the filtered signal. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers induce a load on the output joint shaft flanges. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft.
The gearbox used for experimental measurements is of the type most commonly used in modern small to mid-sized passenger cars with transversely mounted powertrain and front wheel drive: a five-speed gearbox with final drive gear and front wheel differential. The results obtained from practical experiments prove that the proposed method is very effective for gear fault diagnosis.
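The filter-then-enhance idea can be illustrated on synthetic data. The wavelet parameterization, candidate center frequencies and impulse amplitudes below are assumptions chosen for demonstration, not the authors' actual settings.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import kurtosis

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic fault signal: impulses every 0.1 s buried in Gaussian noise.
signal = rng.normal(0.0, 1.0, t.size)
signal[::100] += 25.0

def morlet_filter(x, fc, bw, fs):
    """Band-pass x with a complex Morlet-type wavelet of centre
    frequency fc and bandwidth parameter bw (both in Hz); return the
    envelope of the filtered signal."""
    tw = np.arange(-0.1, 0.1, 1 / fs)
    wavelet = np.exp(-(tw * bw) ** 2) * np.exp(2j * np.pi * fc * tw)
    return np.abs(fftconvolve(x, wavelet, mode="same"))

# Select the centre frequency that maximises the kurtosis of the
# envelope, mimicking the maximum-kurtosis parameter selection.
candidates = [50.0, 150.0, 300.0]
best_fc = max(candidates,
              key=lambda fc: kurtosis(morlet_filter(signal, fc, 40.0, fs)))

# Autocorrelation enhancement: the periodic impulsive feature shows up
# as a peak at the impulse period (here 100 samples = 0.1 s).
env = morlet_filter(signal, best_fc, 40.0, fs)
env = env - env.mean()
acf = np.correlate(env, env, mode="full")[env.size - 1:]
acf /= acf[0]
```

The lag of the dominant autocorrelation peak recovers the fault repetition period even though the raw signal is noise-dominated.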

Keywords: wavelet analysis, pitted gear, autocorrelation, gear fault diagnosis

Procedia PDF Downloads 375
1164 Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Authors: Markus Nordlund, Arkadiusz K. Kuczaj

Abstract:

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), referred to in this paper as the Electrically Heated Tobacco System (EHTS), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (the Electrically Heated Tobacco Product (EHTP)) when inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS as well as for relevant gas mixture compositions measured in the EHTP aerosol, consisting mostly of water, glycerol and nicotine, but also other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use, when operated in the Holder, has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state alone and therefore could not generate aerosol droplets from the multicomponent gas mixture at the operating conditions simulated. For the analytically characterized aerosol composition and estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP.
This implies that according to the CNT, an aerosol former, such as glycerol, needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess whether these conclusions are sensitive to the initial amounts of the minor compounds, and to account for the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with the initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore not actively contributing to the nucleation process. This implies that according to the CNT, an aerosol cannot be generated without the help of an aerosol former from multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP, even if all minor compounds are released or generated in a single puff.
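The supersaturation condition at the heart of CNT can be sketched as follows. The partial pressures, saturation pressures and Kelvin-relation inputs below are illustrative numbers, not measured EHTP values.

```python
import math

def saturation_ratio(p, p_sat):
    """Saturation ratio S = p / p_sat; homogeneous nucleation in CNT
    requires supersaturation, S > 1."""
    return p / p_sat

def critical_radius(sigma, v_m, temp, s):
    """Kelvin relation for the critical cluster radius in CNT:
    r* = 2 * sigma * v_m / (k_B * T * ln S), with surface tension sigma
    (N/m), molecular volume v_m (m^3) and temperature temp (K).
    Returns infinity when the vapor is not supersaturated, i.e. no
    finite critical cluster exists."""
    if s <= 1.0:
        return math.inf
    return 2.0 * sigma * v_m / (1.380649e-23 * temp * math.log(s))

# Illustrative partial and saturation pressures (Pa) in the cooling zone:
species = {
    "glycerol": (50.0, 5.0),     # aerosol former: strongly supersaturated
    "water": (2000.0, 4000.0),   # abundant but below saturation
    "nicotine": (1.0, 10.0),     # minor compound, far from saturation
}

supersaturated = [name for name, (p, p_sat) in species.items()
                  if saturation_ratio(p, p_sat) > 1.0]
```

With numbers in this regime, only the aerosol former crosses the S > 1 threshold, mirroring the qualitative conclusion of the simulations.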

Keywords: aerosol, classical nucleation theory (CNT), electrically heated tobacco product (EHTP), electrically heated tobacco system (EHTS), modeling, multicomponent, nucleation

Procedia PDF Downloads 256
1163 Towards a Mandatory Frame of ADR in Divorce Cases: Key Elements from a Comparative Perspective for Belgium

Authors: Celine Jaspers

Abstract:

The Belgian legal system is slowly evolving toward mandatory mediation to promote ADR. One of the reasons for this evolution is the limited use of alternative methods relative to their possible benefits. Especially in divorce cases, ADR can play a beneficial role in resolving disputes, since the emotional component is very much present. When children are involved, a solution devised by the parents may be better adapted to the child’s best interest than a court order. The first part indicates the lack of use of voluntary ADR and the evolution toward mandatory ADR in Belgium on the basis of legislation, jurisprudence and social-scientific sources, with special attention to divorce cases. One of the reasons is a lack of knowledge of ADR, despite the continuing efforts of the Belgian legislator to promote it. One of the most recent acts of ADR promotion was the implementation of an Act in 2018, which gives the judge the possibility to refer parties to mediation during the judicial procedure if at least one party is willing. This referral is subject to certain conditions. The parties are sent to a private mediator, recognized by the Federal Mediation Commission, to try to resolve their conflict. This means that at least one party can be mandated to try mediation (referred to here as 'semi-mandatory mediation'). The main goal is to establish the factors and elements that Belgium has to take into account in its further development of mandatory ADR, with consideration of the human rights perspective and the EU perspective. Furthermore, it is essential to detect dangerous pitfalls that other systems have encountered in their process design. Therefore, the second, comparative part discusses the existing framework in California, USA, to establish the necessary elements, possible pitfalls and considerations the Belgian legislator can take into account when further developing a framework of mandatory ADR.
The contrasting and functional method will be used to derive key elements and possible pitfalls to help Belgium improve its existing framework. The mandatory system in California has been in place since 1981 and is still operating, and can thus provide valuable lessons and considerations for the Belgian system. Thirdly, the key elements from a human rights perspective and from a European Union perspective (e.g. the right of access to a judge, the right to privacy) will be discussed as well, since basic human rights and European legislation and jurisprudence play a significant part in Belgian legislation. The main sources for this part are the international and European treaties, legislation, jurisprudence and soft law. The concluding part lists the most important elements of a mandatory ADR system design, with special attention to the dangers of these elements (e.g. whether to include or exclude domestic violence cases in the mandatory ADR framework, and the consequences thereof) and to the necessary international and European rights, prohibitions and guidelines.

Keywords: Belgium, divorce, framework, mandatory ADR

Procedia PDF Downloads 131
1162 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis has been characterized by massive monetary policy interventions by the central bank, and it has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with a VAR-Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis documents a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the period of study into a pre-crisis period (2002-2007) and a post-crisis period (2007-2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found most of the liquidity variables to be significant throughout the sample period. This reveals the predictability of monetary policy on aggregate market liquidity.
This study contributes to the existing body of literature by documenting strong predictability of monetary policy on stock liquidity in an emerging economy with an order-driven market system such as India's. Most previous studies have been carried out in economies with quote-driven or hybrid market-making systems, and their results are ambiguous across different periods. In a broader sense, this study may be considered a baseline for further work on the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
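One of the liquidity measures named above, the Amihud illiquidity ratio, is straightforward to compute. The daily returns and traded values below are invented for illustration, not NSE data.

```python
import numpy as np

def amihud_illiquidity(returns, traded_value):
    """Amihud (2002) illiquidity ratio: the mean of |daily return|
    divided by daily traded value. Higher values mean lower liquidity,
    i.e. a larger price impact per unit of trading volume."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(traded_value, dtype=float)
    return float(np.mean(np.abs(r) / v))

# Illustrative daily data: returns in decimals, traded value in currency units.
r = [0.01, -0.02, 0.005, 0.015]
v = [120.0, 80.0, 150.0, 100.0]
illiq = amihud_illiquidity(r, v)
```

In a VAR setting, a monthly series of such ratios per stock (or aggregated across the market) would enter the system alongside the monetary policy variables.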

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 361
1161 Multi-Scale Spatial Difference Analysis Based on Nighttime Lighting Data

Authors: Qinke Sun, Liang Zhou

Abstract:

The ‘Dragon-Elephant Debate’ between China and India is an important manifestation of global multipolarity in the 21st century. The two rising powers carried out economic reforms roughly a decade apart, becoming the fastest growing developing country and emerging economy in the world. At the same time, the development differences between China and India have gradually attracted wide attention from scholars. Based on continuous annual nighttime light data (DMSP-OLS) from 1992 to 2012, this paper systematically compares and analyses the regional development differences between China and India using the Gini coefficient, the coefficient of variation, the comprehensive night light index (CNLI) and hot-spot analysis. The results show that: (1) China's overall expansion from 1992 to 2012 is 1.84 times that of India, with China's lit area growing 2.6 times and India's 2 times. The percentage of unlighted area in China dropped from 92% to 82%, while in India it dropped from 71% to 50%. (2) China's new growth-oriented cities appear in Hohhot, Ordos (Inner Mongolia) and Urumqi in the west, while its declining cities are concentrated in Liaoning Province and Jilin Province in the northeast; India's new growth-oriented cities are concentrated in Chhattisgarh in the north, while its declining areas are distributed in Uttar Pradesh. (3) China's differences at different scales are lower than India's, and regional inequality of development is gradually narrowing: Gini coefficients at the regional and provincial levels have decreased from 0.29 and 0.44 to 0.24 and 0.38, respectively. Regional inequality in India, by contrast, has improved only slowly and regional differences are gradually widening, with the regional Gini coefficient rising from 0.28 to 0.32 and the provincial Gini coefficient decreasing slightly from 0.64 to 0.63.
(4) The spatial pattern of China's regional development is mainly an east-west difference, reflecting the contrast between coastal and inland areas; the spatial pattern of India's regional development is mainly a north-south difference, but because the southern states border the sea, it also reflects a coastal-inland difference to a certain extent. (5) Beijing and Shanghai present a multi-core outward expansion model, with an average annual CNLI higher than 0.01, while New Delhi and Mumbai present a main-core enhancement expansion model, with an average annual CNLI lower than 0.01; the average annual CNLI of Shanghai is about five times that of Mumbai.
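The Gini coefficient used for the inequality comparisons above can be computed directly from regional light totals. The values below are illustrative, not DMSP-OLS data.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array
    (0 = perfect equality, values near 1 = extreme concentration)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    total = v.sum()
    ranks = np.arange(1, n + 1)
    # Standard rank-based formula for the Gini coefficient.
    return float((2.0 * np.sum(ranks * v) - (n + 1) * total) / (n * total))

# Illustrative regional light totals (arbitrary digital-number sums):
equal = [10.0, 10.0, 10.0, 10.0]    # perfectly even development
skewed = [1.0, 2.0, 3.0, 94.0]      # light concentrated in one region
```

Applied per country and per year to provincial or regional light sums, this yields the inequality trajectories compared in the abstract.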

Keywords: spatial pattern, spatial difference, DMSP-OLS, China, India

Procedia PDF Downloads 139