Search results for: informational efficiency theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11006

956 Evaluation of Health Services after Emergency Decrees in Turkey

Authors: Sengul Celik, Alper Ketenci

Abstract:

Article 56 of the Turkish Constitution states that everyone has the right to live in a healthy and balanced environment. It is the duty of the state and of citizens to improve the environment, protect environmental health, and prevent environmental pollution. The state ensures that everyone lives in physical and mental health; it organizes the planning and delivery of health services from a single source in order to realize cooperation by increasing savings and efficiency in human and material resources. The state fulfills this task by utilizing and supervising health and social institutions in the public and private sectors. General health insurance may be established by law for the widespread delivery of health services. Access to health care is one of the basic rights of patients. After the coup attempt of July 2016, the Government of Turkey declared a state of emergency and issued numerous emergency decrees. Under these decrees, many people were dismissed from their jobs and lost basic social rights, and violations occurred in social life. One of the most common observations is discrimination by the government in the health care system. This study aims to document the violation of human rights in the Turkish health care system against people placed in a discriminated position by an emergency decree. The study is a case study based on nine interviews with people, or relatives of people, who lost their jobs by an emergency decree in Turkey. For the safety of the individuals, no personally identifiable information was obtained, and no questions regarding the identity of individuals were asked. The interviews were conducted through internet call applications. The data were analyzed against the requirements of the regular health care system in Turkey. The interviews show that the people concerned, or their relatives, lost their right to regular health care. They have to pay extra amounts for both clinical services and medication. The patients' right to quality medical care without prejudice is violated. The study finds that people affected by an emergency decree, and their relatives, are discriminated against by the government and deprived of regular medical care and supervision. Although international legal arrangements and the legal responsibilities of the state are set out in Article 56, they are violated in practice. To prevent such violations, measures should be taken against deprivation in the health care system, especially towards people discriminated against by an emergency decree.

Keywords: emergency decree in Turkey, health care, discriminated people, patients' rights

Procedia PDF Downloads 106
955 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

In urban areas, soil deformation induced by excavations often damages the surrounding structures, so displacement control becomes a critical indicator in foundation design for protecting those structures. Evaluating the damage potential that an excavation poses to surrounding structures usually relies on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil significantly influences the response of the excavation. Three-dimensional FEM accounting for small-strain soil behaviour is a very complex method that is hard for engineers to use. It is therefore important to develop a simplified method with which engineers can predict the influence of excavations on surrounding structures. Based on large-scale finite element calculations with a small-strain soil model, coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by a braced excavation. The empirical method captures the small-strain behaviour of the soil and is suitable for layered soil. The free-field soil movement is then applied to the pile to calculate its response in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through a finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions, and soil-soil interactions are all counted to improve the accuracy of the method. For passive piles, the shadow effects are also calculated. Finally, the restrictions of the raft on the piles and the soil are summarized as: (1) the summation of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, is equal to zero; (2) the deformations of the pile heads and of the soil surface elements equal the deformations of the corresponding elements of the raft. Validation is carried out by comparing the results of the proposed method with model tests, FEM, and the existing literature; the agreement is very good. The method proposed herein is suitable for predicting the vertical and horizontal responses of a pile-raft foundation induced by braced excavation in layered soil when the deformation is small. However, more data are needed to verify the method before it can be used in practice.
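As a rough, self-contained illustration of the finite-difference step described above, the sketch below solves a single elastic pile, modeled as a beam on an elastic foundation, subjected to a free-field lateral soil movement profile. All parameter values and the pinned-end boundary conditions are illustrative assumptions, not taken from the paper (which also models pile-pile, pile-soil-pile, and raft interactions).

```python
# Sketch: EI*y'''' + k*(y - ys) = 0 by central finite differences.
# Illustrative parameters only; boundary conditions simplified to pinned ends.
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))
        y[r] = s / M[r][r]
    return y

def pile_response(EI, k, L, n, soil_move):
    """Pile deflection under free-field soil movement ys(x), pinned ends."""
    h = L / (n - 1)
    ys = [soil_move(i * h) for i in range(n)]
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    A[0][0] = A[n - 1][n - 1] = 1.0           # y = 0 at head and tip
    c = EI / h ** 4
    for i in range(1, n - 1):
        st = {i - 2: 1.0, i - 1: -4.0, i: 6.0, i + 1: -4.0, i + 2: 1.0}
        if i == 1:                  # zero-moment ghost node: y[-1] = -y[1]
            st.pop(i - 2); st[i] -= 1.0
        if i == n - 2:              # mirrored condition at the tip
            st.pop(i + 2); st[i] -= 1.0
        for j, w in st.items():
            A[i][j] += c * w
        A[i][i] += k                # soil spring coupling pile to soil
        b[i] = k * ys[i]
    return solve(A, b)

# Half-sine free-field soil movement with a 50 mm peak along a 10 m pile
L_pile, n = 10.0, 41
y = pile_response(EI=1.0e5, k=1.0e3, L=L_pile, n=n,
                  soil_move=lambda x: 0.05 * math.sin(math.pi * x / L_pile))
print(round(y[n // 2], 4))  # mid-depth deflection: pile follows soil partially
```

Because the soil spring couples the pile to the moving soil, the computed deflection lies between zero and the peak soil movement; the production method layers the pile-pile and raft constraints on top of this basic scheme.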

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 226
954 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes

Authors: David S. Byrne

Abstract:

The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with the target of a terrorist plot. The study also sought to understand the advantages of using simulations to develop the skills of future intelligence analysts and so enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (not obtained by listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and the activities planned. Through temporal and frequency analysis, conclusions were drawn that offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Investigators often rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. This paper therefore poses two research questions: how effective are freely available web sources at determining the actual identity of callers? Secondly, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology consisted of the analysis of the call detail records of the author's personal phone activity spanning a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and to understand how his personal attributes can further paint a picture of the target's intentions. The results were notable: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. The suspected terrorist's inner circle was identified, including relatives and potential collaborators as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research shows the benefits of cellphone analysis without more intrusive and time-consuming methodologies, while remaining instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study develops the competencies necessary to improve investigations overall.
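The frequency and temporal analysis described above can be sketched in a few lines. The record layout and phone numbers below are synthetic, invented purely for illustration; real call detail record (CDR) exports differ by carrier.

```python
# Minimal frequency and temporal analysis of synthetic call detail records.
from collections import Counter
from datetime import datetime

records = [  # (timestamp, other_party, direction, duration_seconds)
    ("2023-01-05 09:12", "555-0101", "out", 120),
    ("2023-01-05 21:40", "555-0199", "in",  600),
    ("2023-01-06 09:05", "555-0101", "out",  45),
    ("2023-01-07 09:30", "555-0101", "in",  300),
    ("2023-01-07 22:15", "555-0199", "out", 900),
]

freq = Counter(r[1] for r in records)   # contact frequency per number
talk = Counter()                        # cumulative talk time per number
hours = Counter()                       # calls per hour of day (temporal pattern)
for ts, party, _, dur in records:
    talk[party] += dur
    hours[datetime.strptime(ts, "%Y-%m-%d %H:%M").hour] += 1

print(freq.most_common(1))   # most frequent contact
print(talk.most_common(1))   # longest cumulative talk time
```

Even on this toy data the two rankings diverge (the most frequent contact is not the one spoken to longest), which is exactly the kind of pattern the analyst then tries to explain by identifying the callers through open sources.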

Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations

Procedia PDF Downloads 10
953 Mobile Learning and Student Engagement in English Language Teaching: The Case of First-Year Undergraduate Students at Ecole Normale Superieure, Algeria

Authors: I. Tiahi

Abstract:

The aim of the current paper is to explore educational practices in contemporary Algeria. Research suggests that such practices follow a traditional approach and overlook modern teaching methods such as mobile learning. The study therefore examined student engagement with respect to mobile learning through the following objectives: (1) to evaluate the current practice of English language teaching within Algerian higher education institutions, (2) to explore how social constructivism theory and m-learning support students' engagement in the classroom, and (3) to explore the feasibility and acceptability of m-learning amongst institutional leaders. The methodology combines a case study and action research. For the case study, the researcher engaged 6 teachers, 4 institutional leaders, and 30 students in semi-structured interviews and classroom observations to explore the current methods of teaching English as a foreign language. For the action research, the researcher ran an intervention course to investigate the possibility of, and implications for, future implementation of mobile learning in higher education institutions. The results were analyzed thematically. The research showed that the disengagement of students in English language learning has many aspects. The teacher interviews revealed that teachers lack resources, apart from the PowerPoint slides some of them use, and that the teaching methods are mostly communicative and competency-based. Teachers reported that students are disengaged because of psychological barriers: in the classroom, students are conscious of social approval from their peers, so the fear of negative reinforcement that would damage their image acts as a preventive mechanism against committing mistakes. This was clearly reflected in the findings. Other explanations are possible; however, in the Algerian setting it is usual practice for teachers not to provide the positive reinforcement that opens students up to learning, and proper measures can be taken to overcome this psychological barrier. In conclusion, teachers, students, and institutional leaders gave positive feedback on using mobile learning: it is not only motivating but also engaging in the learning process. Apps such as Kahoot, Padlet, and Slido were well received and can be examined further for their wider impact in the Algerian context. In the future, it will therefore be important to implement m-learning effectively in higher education to transform the current traditional practices into modern, innovative, and active learning. Persuading stakeholders of this change may be challenging; however, its long-term benefits are reflected in the current research.

Keywords: Algerian context, mobile learning, social constructivism, student engagement

Procedia PDF Downloads 135
952 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis

Authors: Natnael Debalklie Teshome

Abstract:

The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75–2017/18 were collected from databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of export. First, unit root tests were conducted using the ADF and PP tests, and the data were found to be stationary with a mix of I(0) and I(1) series. Then, the bounds test and Wald test were employed, and the results showed that long-run co-integration exists among the study variables. All the diagnostic tests also indicate that the model fulfills the criteria of a well-fitted model. Therefore, the ARDL model and VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test the causality between the study variables. The empirical findings reveal that, in the long run, only export and the coefficient of variation had significant impacts on RGDP (positive and negative, respectively), while the other variables had an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows that exports had a strong, significant impact in both the short-run and long-run periods; however, the positive and statistically significant impact is observed only in the long run. Similarly, there was highly significant export fluctuation in both periods, while significant commodity concentration (CCI) was observed only in the short run. Moreover, the Granger causality test reveals unidirectional causality running from export performance to RGDP in the long run, and from both export and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring about structural transformation. Hence, the government should provide incentive schemes and supportive measures to exporters to capture the spillover effects of exports. Greater emphasis should also be given to price-oriented diversification and to specialization in the major primary products in which the country has a comparative advantage, so as to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development by enhancing investment in technology and in the quality of education to accelerate the economic growth of the country.
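The mixed I(0)/I(1) finding that motivates the ARDL bounds approach rests on unit-root testing. Below is a toy illustration of the Dickey-Fuller regression behind the ADF test used above: regress the first difference on the lagged level and inspect the slope (near zero for a unit-root I(1) series, clearly negative for a stationary I(0) series). The data are simulated; a real study would use the full ADF test with lag selection and critical values (e.g., statsmodels' adfuller).

```python
# Toy Dickey-Fuller regression on simulated I(1) and I(0) series.
import random

def df_slope(y):
    """OLS slope (with intercept) of diff(y) on the lagged level of y."""
    x = y[:-1]
    d = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    mx, md = sum(x) / len(x), sum(d) / len(d)
    sxx = sum((v - mx) ** 2 for v in x)
    sxd = sum((x[i] - mx) * (d[i] - md) for i in range(len(x)))
    return sxd / sxx

rng = random.Random(0)                    # fixed seed for reproducibility
walk, ar1 = [0.0], [0.0]
for _ in range(300):
    e = rng.gauss(0, 1)
    walk.append(walk[-1] + e)             # random walk: unit root, I(1)
    ar1.append(0.5 * ar1[-1] + e)         # stationary AR(1): I(0)

print(round(df_slope(walk), 3), round(df_slope(ar1), 3))
```

The stationary series yields a strongly negative slope (around phi - 1 = -0.5 here), while the random walk's slope stays near zero, which is the intuition the ADF t-statistic formalizes.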

Keywords: export, economic growth, export diversification, instability, co-integration, Granger causality, Ethiopian economy

Procedia PDF Downloads 71
951 Scaling out Sustainable Land Use Systems in Colombia: Some Insights and Implications from Two Regional Case Studies

Authors: Martha Lilia Del Rio Duque, Michelle Bonatti, Katharina Loehr, Marcos Lana, Tatiana Rodriguez, Stefan Sieber

Abstract:

Nowadays, many agricultural practices reduce the ability of ecosystems to provide goods and services. To enhance environmentally friendly food production and to maximize social and economic benefits, sustainable land use systems (SLUS) are among the most critical strategies, strongly promoted by donor organizations, international agencies, and policymakers. This raises the question of how SLUS can be scaled out to large-scale landscapes rather than remaining isolated experiments. As SLUS are context-specific strategies, the diffusion and replication of successful SLUS in Colombia requires identifying the main factors that facilitate this scaling-out process. We applied a case study approach to investigate the scaling out of SLUS in the cocoa and livestock sectors within peacebuilding territories in Colombia, specifically in the Cesar and Caqueta regions. The two regions are contrasting, but both show a current trend of increasing land degradation: Caqueta is presently one of the most deforested departments in Colombia, and Cesar has some of the country's most degraded soils. Following a qualitative research approach, 19 semi-structured interviews and 2 focus groups were conducted with agroforestry experts in both regions (1) to establish what a sustainable land use system means in cocoa or livestock production, specifically in Caqueta or Cesar, and (2) to identify, across the biophysical, economic and profitability, market, social, and policy and institutional dimensions, the key elements that explain how and why SLUS are replicated and spread among more producers. The interviews were coded and analyzed using MAXQDA to identify, analyze, and report patterns (themes) within the data. The results show that key themes, among them premium markets, solid regional markets and price stability, water availability and management, generational renewal, land use knowledge and diversification, and producer organizations and certifications, are crucial to understanding how SLUS can have an impact across large-scale landscapes and how the scaling-out process can best be set up to succeed across different contexts. The analysis further reveals which key factors might affect SLUS efficiency.

Keywords: agroforestry, cocoa sector, Colombia, livestock sector, sustainable land use system

Procedia PDF Downloads 155
950 Integration of the Electro-Activation Technology for Soy Meal Valorization

Authors: Natela Gerliani, Mohammed Aider

Abstract:

Nowadays, interest in using sustainable technologies for protein extraction from underutilized oilseeds is growing. A major disposal problem for the oil industry is by-products of plant food processing such as soybean meal, which makes the valorization of soybean meal important since it contains high-quality proteins and other valuable components. Soybean meal is generally used in livestock and poultry feed and rarely in human food, although its chemical composition can compensate for nutritional deficiencies and be used to balance protein in the human diet. For efficient valorization of soybean meal, extraction is the key process for obtaining an enriched protein ingredient that can be incorporated into the food matrix. However, extracting food components such as proteins from oilseed by-products usually implies the use of organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In the context of sustainable production, electro-activation technology appears to be a good alternative. Indeed, electro-activation requires only water, food-grade salt, and electricity as the main inputs; moreover, this innovative technology avoids the special equipment and worker safety training, as well as the transport and storage of hazardous materials, that conventional chemicals entail. Electro-activation is a technology based on applied electrochemistry that generates acidic and alkaline solutions through the oxidation-reduction reactions occurring in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor. The reactor consists of three compartments separated by cation and anion exchange membranes, which allow non-contacting acidic and basic solutions to be created. Different current intensities (150 mA, 300 mA, and 450 mA) and treatment durations (10 min, 30 min, and 50 min) were tested. The results showed that the extracts obtained by the electro-activation method compare well in quality with conventional extracts. For instance, the extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%; moreover, the maximum protein content of the extract was 48% with electro-activation compared to a maximum of 41% with conventional extraction. The environmentally sustainable electro-activation technology therefore seems to be a promising protein extraction method that can replace conventional extraction technology.

Keywords: by-products, eco-friendly technology, electro-activation, soybean meal

Procedia PDF Downloads 223
949 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the long-term national climate goals of the German government for the building sector, substantial energy measures have to be executed. Historically, those measures were primarily energy efficiency measures on the building shell. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP plants and electrical heat pumps) at the quarter level for the German unrefurbished residential building stock. Three distinct elements have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters are used, differentiated according to significant parameters concerning their heat demand; the core differentiation of these quarters is the construction period of the buildings. The economic assessment, the second crucial element, is executed with the following structure: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with the acquisition of debt; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment (LCA) for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. The core results of the approach can be differentiated into an economic and an ecological dimension. With annual resolution, the investment and running costs of the different technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Consistent with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
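The annualization of investment costs via annuity loans mentioned above follows the standard annuity formula. A minimal sketch is given below; the interest rate, term, and investment figure are illustrative assumptions, not values from the study.

```python
# Annualizing an investment cost with the standard annuity-loan formula.
def annuity_payment(principal, rate, years):
    """Constant annual payment of an annuity loan."""
    if rate == 0:
        return principal / years
    q = 1 + rate
    return principal * rate * q ** years / (q ** years - 1)

# e.g. a hypothetical 100,000 EUR heat-pump investment over 20 years at 3%
pay = annuity_payment(100_000, 0.03, 20)
print(round(pay, 2))
```

Each technology combination's investment cost becomes a constant annual charge this way, which can then be added to the running costs for the year-by-year comparison.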

Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level

Procedia PDF Downloads 221
948 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era

Authors: Cagri Baris Kasap

Abstract:

In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, opposed and criticized the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In it, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing is production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, even making this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon's CreateSpace and Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult for a user of Facebook or Google to gain insight into how one's writing and collaboration is used, captured, and capitalized. Through the lens of this study, it can be argued that this criticism applies to digital producers and even to the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they in a position to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means used against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. The iPhone and the Kindle, for example, combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.

Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking

Procedia PDF Downloads 140
947 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations of this problem, such as those with K-bounded-degree nodes, are NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost, and this procedure is repeated until the cost cannot be reduced further. The algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
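Two quantities from the description above can be sketched numerically: the register size N log₂ K for N nodes of bounded degree K, and the standard optimal Grover iteration count, roughly (π/4)·√(S/M) for a search space of S states containing M marked states. The example figures below are illustrative, not taken from the paper's experiments.

```python
# Register size and optimal Grover iteration count for a bounded-degree TSP.
import math

def register_qubits(n_nodes, k_degree):
    """N * log2(K) qubits: each node stores one of its K encoded edges."""
    return n_nodes * math.ceil(math.log2(k_degree))

def grover_iterations(search_space, marked):
    """Standard optimal iteration count ~ (pi/4) * sqrt(S/M)."""
    return math.floor((math.pi / 4) * math.sqrt(search_space / marked))

n, k = 8, 4                          # hypothetical: 8 nodes, degree bound 4
qubits = register_qubits(n, k)       # 8 * log2(4) = 16 qubits
space = 2 ** qubits                  # 65,536 encoded edge selections
print(qubits, grover_iterations(space, 1))
```

This is why the iterative minimum-cost scheme matters: each pass assumes at least one marked state (a cycle cheaper than the current bound) and reruns the search with the tightened threshold.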

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 187
946 The Impact of Roof Thermal Performance on the Indoor Thermal Comfort in a Natural Ventilated Building Envelope in Hot Climatic Climates

Authors: J. Iwaro, A. Mwasha, K. Ramsubhag

Abstract:

Global warming has become a defining threat of our time. It challenges life on earth, the built environment and the natural environment, and has a clear impact on levels of energy and water consumption. Rising ambient temperatures raise indoor and outdoor building temperatures, driving greater use of energy and of mechanical air-conditioning systems. In addition, in view of increased modernization and economic growth in developing countries, a significant amount of energy is being used, especially in countries with hot climates. Since modernization in developing countries is rising rapidly, more pressure is being placed on buildings and energy resources to satisfy indoor comfort requirements. This paper presents a sustainable passive roof solution as a means of reducing cooling loads while satisfying human comfort requirements in a hot climate. Drawing on field-study data, it discusses indoor thermal roof design strategies for a hot climate by investigating the impact of roof thermal performance on indoor thermal comfort in naturally ventilated, small-scale building envelope structures. Three roof types were compared: a traditional concrete flat roof, a corrugated galvanised iron roof and a pre-painted standing seam roof. The experiment made use of three identical small-scale physical models constructed and sited on the roof of a building at the University of the West Indies. The results show that adding insulation to traditional roofing systems significantly reduces heat transfer between the internal and ambient environments, thus reducing the energy demand of the structure and its relative carbon footprint per unit area over its lifetime. The flat concrete slab roofing system also performed best, outperforming the metal roof sheeting alternatives.
In addition, this study has shown experimentally that a sustainable passive roof solution such as an insulated flat concrete roof in a hot, dry climate offers better cooling performance, providing building occupants with improved thermal comfort, conducive indoor conditions and energy efficiency.
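The reported insulation effect can be illustrated with a one-dimensional steady-state conduction estimate. The sketch below is illustrative only; the layer thicknesses and thermal conductivities are assumed typical values, not measurements from this study.

```python
def roof_heat_flux(layers, t_out_c, t_in_c):
    """Steady-state 1-D conduction through roof layers in series.

    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    Returns heat flux into the building in W/m^2.
    """
    r_total = sum(t / k for t, k in layers)  # series thermal resistance (m^2.K/W)
    return (t_out_c - t_in_c) / r_total

# Assumed typical properties (not from the study):
bare_slab = [(0.15, 1.4)]                # 150 mm concrete, k ~ 1.4 W/mK
insulated = bare_slab + [(0.05, 0.035)]  # plus 50 mm insulation, k ~ 0.035 W/mK

q_bare = roof_heat_flux(bare_slab, t_out_c=35.0, t_in_c=25.0)
q_insulated = roof_heat_flux(insulated, t_out_c=35.0, t_in_c=25.0)
```

With these assumed values the insulated slab admits over an order of magnitude less heat flux than the bare slab, consistent with the abstract's finding that insulation significantly reduces heat transfer and energy demand.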

Keywords: building envelope, roof, energy consumption, thermal comfort

Procedia PDF Downloads 266
945 Examining Kokugaku as a Pattern of Defining Identity in Global Comparison

Authors: Mária Ildikó Farkas

Abstract:

Kokugaku of the Edo period can be seen as a key factor in defining cultural (and national) identity in the 18th and early 19th centuries on the basis of Japanese cultural heritage. Kokugaku focused on the Japanese classics: on exploring, studying and reviving (or even inventing) ancient Japanese language, literature, myths, history and also political ideology. ‘Japanese culture’ as such was distinguished from Chinese (and all other) cultures, and ‘Japanese identity’ was thus defined. Meiji scholars used kokugaku conceptions of Japan to construct a modern national identity based on premodern and culturalist conceptions of community. The Japanese cultural movement of the 18th-19th centuries (kokugaku) of defining cultural and national identity before modernization can be compared not with the development of Western Europe (where national identity was strongly attached to modern nation states) or other parts of Asia (where such identities emerged after Western colonization), but rather with the ‘national awakening’ movements of the peoples of East Central Europe, a comparison that has not yet been addressed in the secondary literature. The role of a common language, culture, history and myths in the process of defining cultural identity - following mainly Miroslav Hroch’s comparative and interdisciplinary theory of national development - can be examined in comparison with the identity-defining movements of the peoples of East Central Europe (18th-19th c.). In the shadow of a cultural and/or political ‘monolith’ (China for Japan, Germany for Central Europe), before modernity, ethnic groups or communities started to evolve their own identities through cultural movements focusing on their own language and culture, thus creating their cultural identity and, in the end, a new sense of community: the nation.
Comparing actual texts (‘narratives’) of the kokugaku scholars and Central European writers of the nation building period (18th and early 19th centuries) can reveal the similarities of the discourses of deliberate searches for identity. Similar motives of argument can be identified in these narratives: ‘language’ as the primary bearer of collective identity, the role of language in culture, ‘culture’ as the main common attribute of the community; and similar aspirations to explore, search and develop native language, ‘genuine’ culture, ‘original’ traditions. This comparative research offering ‘development patterns’ for interpretation can help us understand processes that may be ambiguously considered ‘backward’ or even ‘deleterious’ (e.g. cultural nationalism) or just ‘unique’. ‘Cultural identity’ played a very important role in the formation of national identity during modernization especially in the case of non-Western communities, who had to face the danger of losing their identities in the course of ‘Westernization’ accompanying modernization.

Keywords: cultural identity, Japanese modernization, kokugaku, national awakening

Procedia PDF Downloads 264
944 A Critical Discourse Analysis of Corporate Annual Reports in a Cross-Cultural Perspective: Views from Grammatical Metaphor and Systemic Functional Linguistics

Authors: Antonio Piga

Abstract:

The study of language strategies in financial and corporate discourse has always been vital for understanding how companies communicate effectively with a wider customer base, and it offers new perspectives on how companies interact with key stakeholders, not only to convey transparency and an image of trustworthiness, but also to create affiliation and attract investment. In the light of Systemic Functional Linguistics, the purpose of this study is to examine and analyse the annual reports of Asian and Western joint-stock companies involved in oil refining and power generation from the point of view of the functions and frequency of grammatical metaphors. More specifically, grammatical metaphor - through the lens of Critical Discourse Analysis (CDA) - serves as the theoretical tool in a synchronic cross-cultural study of the communicative strategies adopted by Asian and Western companies to communicate social and environmental sustainability and to showcase their ethical values, performance and competitiveness to local and global communities and key stakeholders. According to Systemic Functional Linguistics, grammatical metaphor can be divided into two broad areas: ideational and interpersonal. This study focuses on the first type, ideational grammatical metaphor (IGM), which includes de-adjectival and de-verbal nominalisation. The dominant and more effective grammatical tropes used by Asian and Western corporations in their annual reports were examined from both a qualitative and a quantitative perspective. The aim was to categorise and explain how ideational grammatical metaphor is constructed cross-culturally and presented through structural language patterns involving re-mapping between semantics and lexico-grammatical features.
The results show that although there seem to be more differences than similarities in terms of the categorisation of the ideational grammatical metaphors conceptualised in the two case studies analysed, there are more similarities than differences in terms of the occurrence, the congruence of process types and the role and function of IGM. Through the immediacy and essentialism of compacting and condensing information, IGM seems to be an important linguistic strategy adopted in the rhetoric of corporate annual reports, contributing to the ideologies and actions of companies to report and promote efficiency, profit and social and environmental sustainability, thus advocating the engagement and investment of key stakeholders.
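A crude first pass at counting nominalisation candidates in report text can be automated with a suffix heuristic. This is only a rough proxy for the manual SFL analysis described above; the suffix list and length cutoff are assumptions for illustration, not the authors' method.

```python
import re

# Common nominalizing suffixes: a heuristic proxy for de-verbal /
# de-adjectival nominalisation candidates, not a full SFL analysis.
SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

def igm_candidates(text):
    """Return words that look like nominalizations, in order of appearance."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if len(w) > 6 and w.endswith(SUFFIXES)]

def igm_density(text):
    """Candidates per 1,000 words, for comparing reports of different lengths."""
    words = re.findall(r"[a-z]+", text.lower())
    return 1000 * len(igm_candidates(text)) / max(len(words), 1)

sample = "The optimisation of performance drives improvement."
found = igm_candidates(sample)
```

A per-1,000-word density makes occurrence counts comparable across annual reports of very different lengths, which is the kind of frequency comparison the study reports.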

Keywords: corporate annual reports, cross-cultural perspective, ideational grammatical metaphor, rhetoric, systemic functional linguistics

Procedia PDF Downloads 36
943 Family Cohesion, Social Networks, and Cultural Differences in Latino and Asian American Help Seeking Behaviors

Authors: Eileen Y. Wong, Katherine Jin, Anat Talmon

Abstract:

Background: Help seeking behaviors are highly contingent on socio-cultural factors such as ethnicity. Both Latino and Asian Americans underutilize mental health services compared to their White American counterparts. This difference may be related to the composite of one’s social support system, which includes family cohesion and social networks. Previous studies have found that Latino families are characterized by higher levels of family cohesion and social support, and Asian American families with greater family cohesion exhibit lower levels of help seeking behaviors. While both are broadly considered collectivist communities, within-culture variability is also significant. Therefore, this study aims to investigate the relationship between help seeking behaviors in the two cultures with levels of family cohesion and strength of social network. We also consider such relationships in light of previous traumatic events and diagnoses, particularly post-traumatic stress disorder (PTSD), to understand whether clinically diagnosed individuals differ in their strength of network and help seeking behaviors. Method: An adult sample (N = 2,990) from the National Latino and Asian American Study (NLAAS) provided data on participants’ social network, family cohesion, likelihood of seeking professional help, and DSM-IV diagnoses. T-tests compared Latino American (n = 1,576) and Asian American respondents (n = 1,414) in strength of social network, level of family cohesion, and likelihood of seeking professional help. Linear regression models were used to identify the probability of help-seeking behavior based on ethnicity, PTSD diagnosis, and strength of social network. Results: Help-seeking behavior was significantly associated with family cohesion and strength of social network. 
It was found that higher frequency of expressing one’s feelings with family significantly predicted lower levels of help-seeking behaviors (β = -.072, p = .017), while higher frequency of spending free time with family significantly predicted higher levels of help-seeking behaviors (β = .129, p = .002) in the Asian American sample. Subjective importance of family relations relative to one’s peers also significantly predicted higher levels of help-seeking behaviors (β = .095, p = .011) in the Asian American sample. Frequency of sharing one’s problems with relatives significantly predicted higher levels of help-seeking behaviors (β = .113, p < .01) in the Latino American sample. A PTSD diagnosis did not have any significant moderating effect. Conclusion: Considering the underutilization of mental health services by Latino and Asian American minority groups, it is crucial to understand ways in which help-seeking behavior can be encouraged. Our findings suggest that different dimensions within family cohesion and social networks have differential impacts on help-seeking behavior. Given the multifaceted nature of family cohesion and its cultural relevance, the implications of our findings for theory and practice will be discussed.

Keywords: family cohesion, social networks, Asian American, Latino American, help-seeking behavior

Procedia PDF Downloads 63
942 Advanced Separation Process of Hazardous Plastics and Metals from End-Of-Life Vehicles Shredder Residue by Nanoparticle Froth Flotation

Authors: Srinivasa Reddy Mallampati, Min Hee Park, Soo Mim Cho, Sung Hyeon Yoon

Abstract:

One of the issues in promoting End of Life Vehicle (ELV) recycling is technology for the appropriate treatment of automotive shredder residue (ASR). Owing to its high heterogeneity and variable composition (plastic (23–41%), rubber/elastomers (9–21%), metals (6–13%), glass (10–20%), dust (soil/sand), etc.), ASR can be classified as ‘hazardous waste’ on the basis of the presence of heavy metals (HMs), PCBs, BFRs, mineral oils, etc. Considering their relevant concentrations, these metals and plastics should be properly recovered for recycling purposes before ASR residues are disposed of. Brominated flame retardant additives in ABS/HIPS and PVC may generate dioxins and furans at elevated temperatures. Moreover, these BFR additives present in plastic materials may leach into the environment during landfilling operations. ASR thermal processing removes some of the organic material but concentrates the heavy metals and POPs present in the ASR residues. In the present study, Fe/Ca/CaO nanoparticle-assisted ozone treatment was found to selectively hydrophilize the surfaces of ABS/HIPS and PVC plastics, enhancing their wettability and thereby promoting their separation from the other ASR plastics by froth flotation. The water contact angles of ABS, HIPS, and PVC in ASR decreased by about 18.7°, 18.3°, and 17.9°, respectively. Under froth flotation at 50 rpm, about 99.5% of the ABS and 99.5% of the HIPS in the ASR samples sank, with resulting purities of 98% and 99%, respectively. Furthermore, at 150 rpm, 100% of the PVC was separated into the settled fraction, with 98% purity. Total recovery of the non-ABS/HIPS and PVC plastics reached nearly 100% in the floating fraction. This process improved the quality of the recycled ASR plastics by removing surface contaminants and impurities. Further, a hybrid process of ball-milling with Fe/Ca/CaO nanoparticles followed by froth flotation was established for the recovery of HMs from ASR.
After ball-milling with Fe/Ca/CaO nanoparticle additives, the flotation efficiency increased to about 55 wt%, and HM recovery increased to about 90% for the 0.25 mm size fraction of ASR. Coating with Fe/Ca/CaO nanoparticles, combined with subsequent microbubble froth flotation, allowed the air bubbles to attach firmly to the HMs. SEM–EDS maps showed significant amounts of HMs on the surface of the floating ASR fraction. This result, along with the low HM concentration in the settled fraction, was confirmed by elemental spectra and semi-quantitative SEM–EDS analysis. The hybrid process developed here for the preferential separation of hazardous plastics and metals from ASR is simple, highly efficient, and sustainable.
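The recovery and purity figures quoted above are related by simple mass-balance definitions. The sketch below uses hypothetical masses chosen to echo the reported PVC numbers, not the study's raw data.

```python
def recovery_and_purity(target_mass_recovered, other_mass_recovered, target_mass_in_feed):
    """Separation metrics for a recovered (e.g. settled or froth) fraction.

    recovery: fraction of the target material captured from the feed.
    purity:   target material's share of the recovered fraction.
    """
    recovery = target_mass_recovered / target_mass_in_feed
    purity = target_mass_recovered / (target_mass_recovered + other_mass_recovered)
    return recovery, purity

# Hypothetical masses (g): 99.5 g of 100 g of PVC settles, along with 2 g of
# other plastics carried into the settled fraction.
rec, pur = recovery_and_purity(99.5, 2.0, 100.0)
```

With these invented masses the metrics come out near the ~99.5% recovery and ~98% purity reported in the abstract, showing how the two figures constrain each other.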

Keywords: end of life vehicles shredder residue, hazardous plastics, nanoparticle froth flotation, separation process

Procedia PDF Downloads 274
941 Mapping the Early History of Common Law Education in England, 1292-1500

Authors: Malcolm Richardson, Gabriele Richardson

Abstract:

This paper illustrates how historical problems can be studied successfully using GIS even in cases in which data, in the modern sense, is fragmentary. The overall problem under investigation is how early (1300-1500) English schools of Common Law moved from apprenticeship training in random individual London inns, run in part by clerks of the royal chancery, to become what is widely called 'the Third University of England,' a recognized system of independent but connected legal inns. This paper focuses on the preparatory legal inns, called the Inns of Chancery, rather than the senior (and still existing) Inns of Court. The immediate problem studied in this paper is how the junior legal inns were organized, staffed, and located from 1292 to about 1500, and what maps tell us about the role of the chancery clerks as managers of legal inns. The authors first uncovered the names of all chancery clerks of the period, most of them unrecorded in histories, from archival sources in the National Archives, Kew. They then matched the names with London property leases. Using ArcGIS, the legal inns and their owners were plotted on a series of maps covering the period 1292 to 1500. The results show a distinct pattern of ownership of the legal inns and suggest a narrative that would help explain why the Inns of Chancery became serious centers of learning during the fifteenth century. In brief, lower-ranking chancery clerks, always looking for sources of income, discovered by 1370 that legal inns could be a source of income. Since chancery clerks were intimately involved with writs and other legal forms, and since the chancery itself had a long-standing training system, these clerks opened their own legal inns to train fledgling lawyers, estate managers, and scriveners. The maps clearly show growth patterns of ownership by the chancery clerks of both legal inns and other London properties in the areas of Holborn and The Strand between 1370 and 1417.
However, the maps also show that a royal ordinance of 1417 forbidding chancery clerks to live with lawyers, law students, and other non-chancery personnel had an immediate effect, and properties in that area of London leased by chancery clerks simply stop after 1417. The long-term importance of the patterns shown in the maps is that while the presence of chancery clerks in the legal inns likely created a more coherent education system, their removal forced the legal profession, suddenly without a hostelry managerial class, to professionalize the inns and legal education themselves. Given the number and social status of members of the legal inns, the effect on English education was to free legal education from the limits of chancery clerk education (the clerks were not practicing common lawyers) and to enable it to become broader in theory and practice, in fact, a kind of 'finishing school' for the governing (if not noble) class.

Keywords: GIS, law, London, education

Procedia PDF Downloads 171
940 Comparisons between Student Leaning Achievements and Their Problem Solving Skills on Stoichiometry Issue with the Think-Pair-Share Model and Stem Education Method

Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon

Abstract:

The aim of this study is to compare instructional designs, the Think-Pair-Share and conventional learning (5E Inquiry Model) processes, for enhancing students' learning achievements and their problem-solving skills on the stoichiometry issue. The sample consisted of 80 students in two classes at the 11th grade level at Chaturaphak Phiman Ratchadaphisek School, drawn by the cluster random sampling technique to capture students' different learning outcomes in chemistry classes. A 40-student experimental group was taught through the Think-Pair-Share process and a 40-student control group through the conventional learning (5E Inquiry Model) method. Five instruments were used: the 5-lesson instructional plans for the Think-Pair-Share Model (TPSM) and the STEM education method; assessments of students' learning achievements and of their problem-solving skills, administered as pretests and posttests; and comparisons of students' outcomes under the TPSM and STEM education methods. Statistically significant differences between posttest and pretest scores, evaluated with paired t-tests and F-tests, were found for the whole sample of chemistry students. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem-solving skills were also found. The use of the two methods reveals that students' learning achievements and problem-solving skills differ between the groups, guiding practical improvements in chemistry classrooms and assisting teachers in implementing effective instructional approaches.
The mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 efficiency indices were 82.56/80.44 and 83.02/81.65, which, based on the IOC criteria, exceed the 80/80 standard. The predictive efficiency (R²) values indicate that 61% and 67% of the variance in posttest learning achievement, and 63% and 67% of the variance in problem-solving skills relative to learning achievement on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM education instructional methods.
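The E1/E2 criterion referenced above compares formative (during-instruction) performance with summative (posttest) performance against a standard such as 80/80. A minimal sketch of the computation, using made-up scores rather than the study's data:

```python
def e1_e2(formative_scores, formative_max, posttest_scores, posttest_max):
    """E1: mean formative score as a percentage of the maximum possible score.
    E2: mean posttest score as a percentage of the maximum possible score."""
    e1 = 100 * sum(formative_scores) / (len(formative_scores) * formative_max)
    e2 = 100 * sum(posttest_scores) / (len(posttest_scores) * posttest_max)
    return e1, e2

# Hypothetical scores for a 3-student group, each assessment out of 50 points
e1, e2 = e1_e2([42, 40, 41], 50, [41, 39, 42], 50)
meets_80_80 = e1 >= 80 and e2 >= 80
```

Indices such as the reported 82.56/80.44 mean both the during-lesson exercises and the final test averaged above 80% of the maximum score, satisfying the 80/80 standard.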

Keywords: comparisons, students’ learning achievements, think-pare-share model (TPSM), stem education, problem solving skills, chemistry classes, stoichiometry issue

Procedia PDF Downloads 245
939 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China

Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini

Abstract:

Objective: It is increasingly recognized that patients’ rational and meaningful engagement in healthcare can make important contributions to their health care and safety management. However, recent evidence indicates that patients' actual roles in healthcare do not match their desired roles, and many patients report a less active role than desired, suggesting that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze the factors influencing patient engagement and to explore the influence mechanism, with the expectation of contributing to strategy development for patient engagement in healthcare. Methods: On the basis of a literature and theory review, the research framework was developed. According to the research framework, a cross-sectional survey was conducted using the behavior and willingness of patient engagement in healthcare questionnaire, the Chinese version of the All Aspects of Health Literacy Scale, the Facilitation of Patient Involvement Scale, the Wake Forest Physician Trust Scale, and other scales related to influencing factors. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The cross-sectional survey indicated that the mean score for patient engagement behavior was 4.146 ± 0.496, and the mean score for willingness was 4.387 ± 0.459. The level of patient engagement behavior fell short of patients' willingness to be involved in healthcare (t = 14.928, P < 0.01). The influencing mechanism model of patient engagement in healthcare was constructed by path analysis. The path analysis revealed that patient attitude toward engagement, patients’ perception of facilitation of patient engagement, and health literacy directly predicted patients’ willingness to engage, with standardized path coefficients of 0.341, 0.199, and 0.291, respectively.
Patients’ trust in the physician and willingness to engage directly predicted patient engagement, with standardized path coefficients of 0.211 and 0.641, respectively. Patient attitude toward engagement, patients’ perception of facilitation, and health literacy indirectly predicted patient engagement, with standardized path coefficients of 0.219, 0.128, and 0.187, respectively. Conclusions: Patients' engagement behavior did not match their willingness to be involved in healthcare. The influencing mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients’ perception of facilitation of engagement, and health literacy had an indirect positive influence on patient engagement through patients’ willingness to engage. Patients’ trust in the physician and willingness to engage had a direct positive influence on patient engagement. Patient attitude toward engagement, patients’ perception of physician facilitation of engagement, and health literacy were the factors influencing patients’ willingness to engage. The results of this study provide valuable evidence to guide the development of strategies for promoting patients' rational and meaningful engagement in healthcare.
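In a path model of this kind, an indirect effect is the product of the standardized coefficients along the path. The indirect values reported above can be reproduced from the direct coefficients quoted in the abstract (a sketch of the arithmetic, not the authors' software):

```python
def indirect_effect(*path_coefficients):
    """Indirect effect = product of standardized coefficients along a path."""
    product = 1.0
    for c in path_coefficients:
        product *= c
    return product

# Direct paths onto willingness, from the abstract
onto_willingness = {"attitude": 0.341, "facilitation": 0.199, "health_literacy": 0.291}
willingness_to_engagement = 0.641  # direct path: willingness -> engagement

indirect = {name: round(indirect_effect(beta, willingness_to_engagement), 3)
            for name, beta in onto_willingness.items()}
```

Multiplying each coefficient by the willingness-to-engagement path reproduces the reported indirect effects of 0.219, 0.128, and 0.187, confirming that willingness fully mediates these three predictors in the model.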

Keywords: healthcare, patient engagement, influencing factor, the mechanism

Procedia PDF Downloads 152
938 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where patients’ emotional behavior is gathered. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance because of their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised Machine Learning models, including state-of-the-art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets, achieving 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim’s cube, a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is a first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
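The three-component projection step can be sketched with a plain power-iteration PCA. This is an illustrative stand-in, not the authors' implementation, and the synthetic 'embeddings' below are invented for demonstration; in the paper's pipeline the inputs would be the learnt Emo-CNN representations and the three projected axes would be interpreted against Lovheim's cube.

```python
import math
import random

def pca_components(X, n_components=3, iters=300, seed=0):
    """Top principal components of X (a list of equal-length rows),
    found by power iteration with deflation. Pure-stdlib sketch."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mu[j] for j in range(d)] for row in X]
    # Sample covariance matrix (d x d)
    C = [[sum(r[a] * r[b] for r in Xc) / (n - 1) for b in range(d)] for a in range(d)]
    comps = []
    for _ in range(n_components):
        v = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(sum(C[a][b] * v[b] for b in range(d)) * v[a] for a in range(d))
        comps.append(v)
        # Deflate: remove the found component from the covariance matrix
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]
    return mu, comps

def project(x, mu, comps):
    """Map one vector onto the component axes (here, a 3-D emotion space)."""
    xc = [x[j] - mu[j] for j in range(len(x))]
    return [sum(xc[j] * c[j] for j in range(len(x))) for c in comps]

# Synthetic 5-D "embeddings" (invented for demonstration only)
X = [[float(i % 10) * 10.0, float(i % 3), float(i % 2), 0.5, 0.5] for i in range(30)]
mu, comps = pca_components(X)
coords = [project(row, mu, comps) for row in X]  # 3-D points, cf. the cube's axes
```

In practice a library routine (e.g. a standard PCA implementation) would replace the hand-rolled power iteration; the sketch only makes the dimensionality reduction explicit.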

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 149
937 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport

Authors: Surupa Shaw, Debjyoti Banerjee

Abstract:

Enhancing heat transfer in compact volumes is a challenge when constrained by cost, especially the costs associated with minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures with increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values to be dissipated, and the significant decrease in the size of electronic devices pose thermal management challenges that need to be addressed through better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or “fins”) enables dissipation of higher heat fluxes. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called “fractals”, i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volumes and surface areas scale as integer powers of the length scale. Fractal structures are expected to provide an appropriate technological solution to these challenges, enhancing heat transfer in microelectronic devices by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. The study proposes a model that would enable cost-effective solutions for thermal-fluid transport in energy applications.
The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length-scales.
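The surface-area argument above can be made concrete with a toy self-similar branching-channel model. The branching ratio, length ratio, and diameter ratio below are assumptions for illustration (the diameter ratio follows a Murray's-law-style scaling), not the geometry studied in the paper.

```python
import math

def branching_network_totals(l0, d0, branching=2, generations=5,
                             length_ratio=0.7, diameter_ratio=2 ** (-1 / 3)):
    """Total wetted surface area and internal volume of a self-similar
    branching channel network built from cylindrical segments."""
    area = volume = 0.0
    for n in range(generations + 1):
        count = branching ** n                  # channels in generation n
        length = l0 * length_ratio ** n         # each channel's length
        diameter = d0 * diameter_ratio ** n     # each channel's diameter
        area += count * math.pi * diameter * length
        volume += count * math.pi * (diameter / 2) ** 2 * length
    return area, volume

# Surface-to-volume ratio grows as more fractal generations are added:
ratios = []
for g in (1, 3, 5):
    a, v = branching_network_totals(l0=0.01, d0=0.001, generations=g)
    ratios.append(a / v)
```

With these assumed ratios, each added generation multiplies the per-generation area by a factor greater than one while the per-generation volume shrinks, so the area-to-volume ratio climbs with the number of generations. The same intuition underlies why a higher fractal dimension packs more heat-exchange area into a fixed volume.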

Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement

Procedia PDF Downloads 317
936 Inter-Personal and Inter-Organizational Relationships in Supply Chain Integration: A Resource Orchestration Perspective

Authors: Bill Wang, Paul Childerhouse, Yuanfei Kang

Abstract:

Purpose: The research extends resource orchestration theory (ROT) into the supply chain management (SCM) area to investigate dyadic relationships at both the individual and organizational levels in supply chain integration (SCI). We also explore the interaction mechanism between inter-personal relationships (IPRs) and inter-organizational relationships (IORs) throughout the SCI process. Methodology/approach: The research employed an exploratory multiple case study of four New Zealand companies. The data were collected via semi-structured interviews with top, middle, and lower-level managers and operators from different departments of both suppliers and customers, triangulated with company archival data. Findings: The research highlights the important role of both IPRs and IORs throughout the SCI process. Both IPRs and IORs are valuable, inimitable resources, but IORs are formal and exterior while IPRs are informal and subordinated. In the initial stage of the SCI process, IPRs are key resource antecedents to IOR building, and the three IPR dimensions work differently: personal credibility acts as an icebreaker, strengthening the confidence that forms IORs; personal affection acts as a gatekeeper; and personal communication expedites the IOR-building process. In the maintenance and development stage, IORs and IPRs interact with each other continuously: good interaction between IPRs and IORs can facilitate the SCI process, while poor interaction can damage it. On the other hand, over the life-cycle of the SCI process, IPRs can facilitate the formation and development of IORs, while IOR development can cultivate the ties of IPRs. Of the three IPR dimensions, personal communication plays a more important role in developing IORs than personal credibility and personal affection. Originality/value: This research contributes to ROT in the supply chain management literature by highlighting the interaction of IPRs and IORs in SCI.
The intangible resources and capabilities of the three IPR dimensions need to be orchestrated and nurtured to achieve efficient and effective IORs in SCI. IPRs and IORs also need to be orchestrated in terms of the breadth, depth, and life-cycle of the whole SCI process. Our study provides further insight into the rarely explored inter-personal level of SCI. Managerial implications: Our research provides top management with further evidence of the significant roles of IPRs at different levels when working with trading partners. This highlights the need to actively manage and develop these soft IPR skills as an intangible competitive resource. Further, the research identifies when staff with specific skills and connections should be utilized during the different stages of building and maintaining inter-organizational ties. Most importantly, top management needs to orchestrate and balance the resources of IPRs and IORs.

Keywords: case study, inter-organizational relationships, inter-personal relationships, resource orchestration, supply chain integration

Procedia PDF Downloads 231
935 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes

Authors: Chih Ming Tseng

Abstract:

Landslides during typhoons that generate substantial amounts of sediment and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which were then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within these regimes. The research results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequent collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower compared to Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among various sediment transport regimes. Regarding sediment control rates at notches, there is a generally positive correlation with the notch constriction ratio. 
During the 2009 Morakot Typhoon, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid in the development of improved sediment management strategies in watersheds.
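The regime delineation from slope-area curves can be sketched as a simple threshold classifier. A minimal sketch, assuming power-law threshold curves of the form S = c·A^θ; the coefficients below are hypothetical placeholders, not the values fitted from the high-resolution DTM data for the three creeks.

```python
# Hypothetical power-law thresholds S = c * A**theta separating regimes;
# in the study the curves are fitted from DTM-derived slope-area data.
THRESHOLDS = {
    "collapse_vs_debris_flow": (0.35, -0.2),        # (c, theta), hypothetical
    "debris_flow_vs_sediment_laden": (0.08, -0.3),  # (c, theta), hypothetical
}

def classify_regime(area_km2, slope):
    """Assign a channel cell to a sediment transport regime by its
    position relative to the slope-area threshold curves."""
    c1, t1 = THRESHOLDS["collapse_vs_debris_flow"]
    c2, t2 = THRESHOLDS["debris_flow_vs_sediment_laden"]
    if slope >= c1 * area_km2 ** t1:
        return "frequent-collapse headwater"
    if slope >= c2 * area_km2 ** t2:
        return "debris flow zone"
    return "high-concentration sediment-laden flow zone"
```

Steep cells with small contributing areas fall above the first curve (collapse headwaters), while gentle cells with large areas fall below both (sediment-laden flow).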

Keywords: landslide, debris flow, notch, sediment control, DTM, slope–area relation

Procedia PDF Downloads 18
934 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between the passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. 
Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the ‘feeling of time’ and the ‘estimation of time’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (M = 42.5 minutes) and overestimated the eight-minute task (M = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly more slowly on the Likert scale (M = 2.5) than the eight-minute task (M = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
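The RATIO standardization applied to the duration estimates can be reproduced directly from the group means reported above (a minimal sketch; only the two reported means are used, and values above 1 indicate overestimation):

```python
# RATIO = estimate / actual duration standardizes estimates across
# task durations: RATIO > 1 means overestimation, RATIO < 1 underestimation.
def duration_ratio(estimate_min, actual_min):
    return estimate_min / actual_min

# Group means from the abstract:
short_ratio = duration_ratio(10.7, 8)   # eight-minute task: > 1, overestimated
long_ratio = duration_ratio(42.5, 58)   # 58-minute task: < 1, underestimated
```

The two ratios make the compensatory pattern explicit: the short, fast-feeling task yields a ratio above 1 and the long, slow-feeling task a ratio below 1.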

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 128
933 Genome-Wide Mining of Potential Guide RNAs for Streptococcus pyogenes and Neisseria meningitidis CRISPR-Cas Systems for Genome Engineering

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) systems can facilitate targeted genome editing in organisms. Dual or single guide RNA (gRNA) can program the Cas9 nuclease to cut target DNA at particular sites, thereby introducing precise mutations either via error-prone non-homologous end-joining repair or via incorporation of foreign DNAs by homologous recombination between donor DNA and the target area. Despite the high demand for such a promising technology, developing a well-organized procedure for the reliable mining of potential gRNA target sites in large genomic data sets remains challenging. Hence, we aimed to perform high-throughput detection of target sites defined by the specific PAMs of not only the common Streptococcus pyogenes (SpCas9) but also the Neisseria meningitidis (NmCas9) CRISPR-Cas system. Previous research confirmed the successful application of such RNA-guided Cas9 orthologs for effective gene targeting and, subsequently, genome manipulation. However, each Cas9 ortholog needs its particular PAM sequence for DNA cleavage activity, and activity levels depend on the sequence of the protospacer and on specific combinations of favorable PAM bases. Therefore, based on the specific length and sequence of the PAM, preceded by a constant length of the target site, for the two orthologs of the Cas9 protein, we created a reliable procedure to explore possible gRNA sequences. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: i) coding-strand searching, ii) anti-coding-strand searching, iii) both-strand searching, and iv) paired-gRNA searching. Finally, a complete list of all potential gRNAs, along with their locations, strands, and PAM sequence orientations, can be provided for SpCas9 as well as for another potential Cas9 ortholog (NmCas9).
The artificial design of potential gRNAs in a genome of interest can accelerate functional genomic studies. Consequently, the application of this novel genome editing tool (CRISPR/Cas technology) will be enhanced through increased versatility and efficiency.
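A minimal sketch of the both-strand searching mode, assuming the canonical PAMs (NGG for SpCas9, NNNNGATT for NmCas9) immediately 3' of a fixed 20-nt protospacer; the actual pipeline, its per-mode options, and the paired-gRNA logic are not reproduced here.

```python
import re

# Canonical PAM patterns as regular expressions; each sits 3' of a
# 20-nt protospacer on the strand being scanned.
PAMS = {"SpCas9": "[ACGT]GG", "NmCas9": "[ACGT]{4}GATT"}
SPACER_LEN = 20

def revcomp(seq):
    """Reverse complement of an ACGT sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_grnas(seq, cas="SpCas9"):
    """Return (position, strand, protospacer, PAM) for every candidate
    target site on both strands of `seq` (both-strand searching mode)."""
    pam = PAMS[cas]
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        # a lookahead group makes overlapping sites visible to finditer
        for m in re.finditer(rf"(?=([ACGT]{{{SPACER_LEN}}})({pam}))", s):
            hits.append((m.start(), strand, m.group(1), m.group(2)))
    return hits
```

Scanning the reverse complement with the same pattern covers the anti-coding strand, so coding-strand and anti-coding-strand searching are the two halves of this loop.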

Keywords: CRISPR/Cas9 genome editing, gRNA mining, SpCas9, NmCas9

Procedia PDF Downloads 255
932 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN

Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran

Abstract:

In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought has, of course, been addressed by many scholars; CL, however, strongly emphasizes specific features of this relation. Through experience we acquire knowledge of life, and familiar, concrete domains serve as ideal source domains: we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper reports on applying this theory in pragmatics lessons for English majors at the University of Foreign Language Studies - The University of Da Nang (UFLS - UDN), Viet Nam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify this study, the data from these two classes were collected for analyzing linguistic perspectives from the CL view and from traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of everyday expressions using the CL view, while the other group used the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities. However, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared the CL and traditional teaching approaches to identify the benefits and challenges associated with incorporating CL into the curriculum.
It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL to the teaching and learning of English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and they explore how CL can enhance cognitive processes in language learning in general and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.

Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS

Procedia PDF Downloads 31
931 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to verify that there were numerous cases of philosophical regression, in which the important achievements of detailed studies were replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency relates to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practices against every form of ideology are rational dialogue and reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist mode of thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist mode of thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology (in the negative sense) is surely an error, a misleading form of knowledge, which aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being.
Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. Therefore, there is a considerable revaluation of ideology's role in the maintenance of the status quo, and the consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 195
930 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western pattern (that of stress-accent languages) are not suitable for tonal languages and do not account for them phonologically, which is why this (prosodic and phonological) ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speech or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its correspondent in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is thus an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also recover the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music.
The experimentation confirming the theorization produced a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled on the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
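As an illustration only, the core idea of writing each word on a staff can be sketched as a tone-to-pitch lookup; the mapping below is entirely hypothetical and is not the author's engraving rules or the actual Ekang tone inventory.

```python
# Hypothetical sketch: each syllable's lexical tone selects a pitch
# (or a pitch pair, for contour tones), so a word can be engraved on
# a staff as a note sequence. The pitch choices are illustrative only.
TONE_TO_PITCH = {
    "H": ["G4"],          # high level tone
    "L": ["C4"],          # low level tone
    "LH": ["C4", "G4"],   # rising contour
    "HL": ["G4", "C4"],   # falling contour
}

def word_to_staff(syllable_tones):
    """Turn a sequence of tone labels into the note sequence that
    would represent that word on the dictionary's staff."""
    notes = []
    for tone in syllable_tones:
        notes.extend(TONE_TO_PITCH[tone])
    return notes
```

A two-syllable high-low word, for instance, becomes a descending two-note figure, which is the sense in which the melody of a song in a tone language also encodes its speech.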

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 66
929 A Markov Model for the Elderly Disability Transition and Related Factors in China

Authors: Huimin Liu, Li Xiang, Yue Liu, Jing Wang

Abstract:

Background: As a typical case among the developing countries that are entering an aging era globally, China has more and more older people who may be unable to maintain a normal life due to functional disability. While the government makes efforts to build a long-term care system and to carry out related policies around this core concept, there is still a lack of strong evidence evaluating the profile of disability states in the elderly population and their transition rates. Disability has been shown to be a dynamic condition of the person rather than an irreversible one, so it is possible to intervene in a timely manner in those who might be at risk of severe disability. Objective: The aim of this study was to depict the disability transition status of older people in China and then identify the individual characteristics that change the state of disability, in order to provide a theoretical basis for disability prevention and early intervention among elderly people. Methods: Data for this study came from the 2011 baseline survey and the 2013 follow-up survey of the China Health and Retirement Longitudinal Study (CHARLS). Normal ADL function, disability in 1-2 ADLs, disability in 3 or more ADLs, and death were defined as states 1 to 4. A multi-state Markov model was applied, and a four-state homogeneous model with discrete states and discrete times, based on the two-visit follow-up data, was constructed to explore factors for the various progressive stages. We modeled the effect of explanatory variables on the rates of transition by using a proportional intensities model with covariates such as gender. Result: In the total sample, the state 2 constituent ratio is about 17.0%, while the state 3 proportion is below it, accounting for 8.5%. Moreover, the difference in ADL disability statistics between the two years is not obvious. About half of those in state 2 in 2011 had improved to normal by 2013, even though they had grown older.
However, the proportion of those in state 3 transitioning to death increased markedly, close to the proportion returning to state 2 or to normal function. From the estimated intensities, we see that older people are eleven times as likely to develop disability in 1-2 ADLs as they are to die. After disability onset (state 2), progression to state 3 is 30% more likely than recovery. Once in state 3, a mean of 0.76 years is spent before death or recovery. In this model, a typical person in state 2 has a probability of 0.5 of being disability-free one year from now, while a person with moderate or more severe disability has a probability of 0.14 of being dead. Conclusion: Considering long-term care costs, preventive programs to delay the disability progression of the elderly could be adopted based on the current disability state and the main factors at each stage. In general terms, programs focusing on elderly individuals with moderate or more severe disability should come first.
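A four-state homogeneous model of this kind is summarized by a transition intensity matrix Q, from which per-visit sojourn times and next-state probabilities follow directly. The Q entries below are hypothetical, chosen only to echo the patterns reported above (a mean sojourn of roughly 0.76 years in state 3, and progression from state 2 about 30% more likely than recovery); they are not the fitted CHARLS estimates.

```python
# States: 0 = normal ADL function, 1 = 1-2 ADLs disability,
# 2 = 3+ ADLs disability, 3 = death (absorbing). Rows sum to zero.
# All intensity values are hypothetical, for illustration only.
Q = [
    [-0.24,  0.22,  0.00, 0.02],
    [ 0.50, -1.20,  0.65, 0.05],   # 0.65 / 0.50 = 1.3: progression 30% likelier
    [ 0.00,  0.55, -1.32, 0.77],   # sojourn = 1/1.32 ~ 0.76 years
    [ 0.00,  0.00,  0.00, 0.00],
]

def mean_sojourn(q, state):
    """Expected time spent in `state` per visit: -1 / q_ii."""
    return -1.0 / q[state][state]

def jump_probs(q, state):
    """Probability that the next transition from `state` goes to each
    other state: q_ij / -q_ii."""
    total = -q[state][state]
    return [qij / total if j != state else 0.0
            for j, qij in enumerate(q[state])]
```

With these illustrative intensities, `mean_sojourn(Q, 2)` is about 0.76 years, and the next-state probabilities from state 1 make progression to severe disability 1.3 times as likely as recovery, mirroring the fitted results described in the abstract.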

Keywords: Markov model, elderly people, disability, transition intensity

Procedia PDF Downloads 287
928 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and vast computing capacity. This is closely linked to technological developments in the area of artificial intelligence, which has prompted analyses covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious currently being held, at both the European Union and the Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years, and it would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence therefore requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems.
The adopted axis of considerations is a preliminary assessment of two issues: 1) which principles of data protection should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how the European Union's regulation of data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples to illustrate the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 56
927 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda

Authors: Jean De Dieu Ayabagabo, Paul A. Onjoro, Karubiu P. Migwi, Marie C. Dusingize

Abstract:

This study investigated the effects of feed on milk yield and quality through feed monitoring and quality assessment, and the consequent enteric methane emissions from smallholder dairy farms in the drier areas of Rwanda, using the Tier II approach over four seasons in three zones, namely: Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out using 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. The data collected were subjected to ANOVA. Dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the practice of crop-livestock integration in that zone. Dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% across seasons and from 51.9 to 57.5% across zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and with the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1179 L. Milk fat varied from 3.79 to 5.49%, with seasonal and zonal variation; no variation was observed in milk protein. The seasonal daily methane emission varied from 150 g in the long dry season to 174 g in the long rain season (p<0.05), the rain season having the highest emission as it is associated with high forage intake. The mean emission factor was 59.4 kg of methane per year. The present EFs were higher than the default IPCC Tier I value of 41 kg for livestock in developing countries of Africa, the Middle East, and other tropical regions, owing to the higher live weight in the current study.
The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase the milk yield and reduce the methane gas produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need for the use of the Life Cycle Assessment approach that considers all the sources of emissions.
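For reference, a Tier II enteric emission factor is computed from gross energy intake (GE, MJ/head/day) and the methane conversion factor Ym (% of GE), using an energy content of 55.65 MJ per kg of CH4; the GE and Ym inputs in the usage note are hypothetical, not the study's measured values.

```python
# IPCC Tier 2 enteric fermentation equation:
# EF (kg CH4 / head / year) = GE * (Ym / 100) * 365 / 55.65
ENERGY_CONTENT_CH4 = 55.65  # MJ per kg of CH4

def emission_factor(ge_mj_per_day, ym_percent):
    """Annual enteric methane emission factor, kg CH4 per head."""
    return ge_mj_per_day * (ym_percent / 100) * 365 / ENERGY_CONTENT_CH4

def methane_per_litre(ef_kg_per_year, annual_milk_litres):
    """Emission intensity, g CH4 per litre of milk produced."""
    return ef_kg_per_year * 1000 / annual_milk_litres
```

For example, a hypothetical GE of 150 MJ/day with Ym = 6.5% gives an EF near 64 kg/year, of the same order as the mean EF of 59.4 kg reported here; dividing the reported mean EF by the yearly milk production of 1179 L gives an emission intensity of roughly 50 g CH4 per litre.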

Keywords: footprint, forage, girinka, tier

Procedia PDF Downloads 202