Search results for: ambidextrous hand

526 Accomplishing Mathematical Tasks in Bilingual Primary Classrooms

Authors: Gabriela Steffen

Abstract:

Learning in a bilingual classroom not only implies learning in two languages or in an L2; it also means learning content subjects through bilingual or plurilingual resources, which is of a qualitatively different nature than ‘monolingual’ learning. These resources form elements of a didactics of plurilingualism, aiming not only at the development of a plurilingual competence, but also at drawing on plurilingual resources for non-linguistic subject learning. Applying a didactics of plurilingualism allows for taking account of the specificities of bilingual content subject learning in bilingual education classrooms. Bilingual education is used here as an umbrella term for different programs, such as bilingual education, immersion, CLIL, and bilingual modules, in which one or several non-linguistic subjects are taught partly or completely in an L2. This paper discusses first results of a study on pupil group work in bilingual classrooms in several Swiss primary schools. For instance, it analyses two bilingual classes in two primary schools in a French-speaking region of Switzerland that follow part of their school program through German in addition to French, the language of instruction in this region. More precisely, it analyses videotaped classroom interaction and in situ classroom practices of pupil group work in a mathematics lesson. The ethnographic observation of pupils’ group work and the analysis of their interaction (using analytical tools of conversation analysis, discourse analysis and plurilingual interaction) complement the description of whole-class interaction carried out in the same (and several other) classes. While the latter are teacher-student interactions, the former are student-student interactions, giving more space to and insight into pupils’ talk. This study aims at describing the linguistic and multimodal resources (in German L2 and/or French L1) pupils mobilize while carrying out a mathematical task. The analysis shows that the accomplishment of the mathematical task takes place in a bilingual mode, whether the whole-class interactions are conducted in a bilingual (German L2-French L1) or in a monolingual L2 (German) mode. The pupils make plenty of use of German L2 in a setting that lends itself to the use of French L1 (peer groups with French as a dominant language, in the absence of the teacher, and a task with a mathematical aim). They switch from French to German and back ‘naturally’, which is typical of bilingual speakers. Their linguistic resources in German L2 are not sufficient to allow them to (inter-)act well enough to accomplish the task entirely in German L2, despite their efforts to do so. However, this does not stop them from carrying out the task in mathematics adequately, which is the main objective, by drawing on the bilingual resources at hand.

Keywords: bilingual content subject learning, bilingual primary education, bilingual pupil group work, bilingual teaching/learning resources, didactics of plurilingualism

Procedia PDF Downloads 139
525 Evaluation of Existing Wheat Genotypes of Bangladesh in Response to Salinity

Authors: Jahangir Alam, Ayman El Sabagh, Kamrul Hasan, Shafiqul Islam Sikdar, Celaleddin Barutçular, Sohidul Islam

Abstract:

The experiment (germination test and seedling growth) was carried out at the laboratory of the Agronomy Department, Hajee Mohammad Danesh Science and Technology University (HSTU), Dinajpur, Bangladesh, during January 2014. Germination and seedling growth of 22 existing wheat genotypes in Bangladesh, viz. Kheri, Kalyansona, Sonora, Sonalika, Pavon, Kanchan, Akbar, Barkat, Aghrani, Prativa, Sourab, Gourab, Shatabdi, Sufi, Bijoy, Prodip, BARI Gom 25, BARI Gom 26, BARI Gom 27, BARI Gom 28, Durum and Triticale, were tested at three salinity levels (0, 100 and 200 mM NaCl) for 10 days in sand culture in small plastic pots. Germination of all wheat genotypes, as expressed by germination percentage (GP), rate of germination (GR), germination coefficient (GC) and germination vigor index (GVI), was delayed, and germination percentage was reduced due to salinization compared to the control. The lower reduction of GP, GR, GC and GVI due to salinity was observed in BARI Gom 25, BARI Gom 27, Shatabdi, Sonora, and Akbar, and the higher reduction was recorded in BARI Gom 26, Durum, Triticale, Sufi and Kheri. Shoot and root lengths and fresh and dry weights were found to be affected by salinization, and the shoot was more affected than the root. Under saline conditions, longer shoot and root lengths were recorded in BARI Gom 25, BARI Gom 27, Akbar, and Shatabdi, i.e., less reduction of shoot and root lengths was observed, while BARI Gom 26, Durum, Prodip and Triticale produced shorter shoot and root lengths. In this study, the genotypes BARI Gom 25, BARI Gom 27, Shatabdi, Sonora and Aghrani showed better performance in terms of shoot and root growth (fresh and dry weights) and proved to be salinity-tolerant genotypes. On the other hand, Durum, BARI Gom 26, Triticale, Kheri and Prodip were seriously affected in terms of fresh and dry weights by the saline environment. BARI Gom 25, BARI Gom 27, Shatabdi, Sonora and Aghrani showed a higher salt tolerance index (STI) based on shoot dry weight, while BARI Gom 26, Triticale, Durum, Sufi, Prodip and Kalyansona demonstrated lower STI values under saline conditions. Based on the most salt-tolerant and susceptible traits, genotypes under 100 and 200 mM NaCl stress can be arranged as salt-tolerant genotypes: BARI Gom 25 > BARI Gom 27 > Shatabdi > Sonora, and salt-susceptible genotypes: BARI Gom 26 > Durum > Triticale > Prodip > Sufi > Kheri. Considering the experiment, it can be concluded that BARI Gom 25 may be treated as the most salt-tolerant and BARI Gom 26 as the most salt-sensitive genotype in Bangladesh.
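
Since the abstract reports the salt tolerance index (STI) but does not define it, the sketch below assumes the commonly used Fernandez (1992) formulation, STI = (Yp x Ys) / Ȳp², computed on shoot dry weight; the genotype values are illustrative placeholders, not the measured data of this study.

```python
# Sketch of a salt/stress tolerance index (STI) calculation.
# Assumption: Fernandez (1992) formulation STI = (Yp * Ys) / Yp_mean**2, applied
# to shoot dry weight. The numbers below are placeholders, not measured data.

shoot_dry_weight = {            # genotype: (control, 100 mM NaCl) in g per seedling
    "BARI Gom 25": (0.30, 0.24),
    "BARI Gom 27": (0.28, 0.21),
    "BARI Gom 26": (0.29, 0.10),
}

yp_mean = sum(c for c, _ in shoot_dry_weight.values()) / len(shoot_dry_weight)

sti = {g: (c * s) / yp_mean ** 2 for g, (c, s) in shoot_dry_weight.items()}

for genotype, value in sorted(sti.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{genotype}: STI = {value:.2f}")   # higher STI = more salt tolerant
```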

Keywords: genotypes, germination, salinity, wheat

Procedia PDF Downloads 274
524 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet the compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa. This is because extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to accessing finance, for instance, physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We, therefore, argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the results of the net effects suggest that digitalization, overall, improves economic growth in Africa. We, therefore, conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent’s economies.
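
The panel estimation strategy named above (fixed effects on country-year data) can be illustrated with the within transformation; the sketch below is a minimal hand-rolled version in which variables are demeaned within each country before an ordinary least squares fit. Column names and all numbers are hypothetical placeholders, and the random effects and Hausman-Taylor estimators also used in the paper are not reproduced here.

```python
# Minimal sketch of a country fixed-effects (within) regression relating GDP
# growth to digitization and financial sector development. Column names and
# all numbers are hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd

def within_fixed_effects(df, y, xs, entity="country"):
    """Demean y and xs within each entity, then fit OLS on the demeaned data."""
    demeaned = df[[y] + xs] - df.groupby(entity)[[y] + xs].transform("mean")
    X = demeaned[xs].to_numpy()
    Y = demeaned[y].to_numpy()
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # intercept absorbed by entity means
    return dict(zip(xs, beta))

panel = pd.DataFrame({
    "country":      ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":         [2015, 2016, 2017] * 3,
    "gdp_growth":   [3.0, 3.5, 4.0, 5.0, 5.5, 5.0, 2.0, 2.5, 3.0],
    "digitization": [1.0, 1.2, 1.5, 0.8, 1.0, 1.1, 1.5, 1.6, 1.8],  # e.g. ICT-equipment imports index
    "credit_gdp":   [20, 21, 23, 30, 31, 33, 15, 16, 18],           # domestic credit to private sector, % of GDP
})

print(within_fixed_effects(panel, "gdp_growth", ["digitization", "credit_gdp"]))
```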

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 109
523 Transition towards a Market Society: Commodification of Public Health in India and Pakistan

Authors: Mayank Mishra

Abstract:

A market economy can be broadly defined as an economic system where supply and demand regulate the economy, and in which decisions pertaining to production, consumption, allocation of resources, price and competition are made by the collective actions of individuals or organisations with limited government intervention. On the other hand, a market society is one where, instead of the economy being embedded in social relations, social relations are embedded in the economy. A market economy becomes a market society when all of land, labour and capital are commodified. This transition also has an effect on people’s attitudes and values. Such a transition begins to impact the non-material aspects of life such as public education, public health and the like. The inception of neoliberal policies in non-market norms altered the nature of social goods like public health and raised the following questions. What impact would the transition to a market society make on people in terms of accessibility to public health? Is healthcare a commodity that can be subjected to a competitive market place? What kinds of private investments are being made in public health, and how do private investments alter the nature of a public good like healthcare? This research problem will employ an empirical-analytical approach that includes deductive reasoning, using the existing concepts of market economy and market society as a foundation for the analytical framework and the hypotheses to be examined. The research also intends to incorporate the naturalistic elements of qualitative methodology, which refers to studying real-world situations as they unfold. The research will analyse the existing literature available on the subject. Concomitantly, the research intends to access primary literature, which includes reports from the World Bank, the World Health Organisation (WHO) and the different departments of the respective ministries of the countries, for the analysis. This paper endeavours to highlight how the commodification of public health would lead to a perpetual increase in its inaccessibility, leading to a stratification of healthcare services where one can avail of better services depending on the extent of one’s ability to pay. Since the fundamental maxim of private investment is to churn out profits, these kinds of trends would pose a detrimental effect on society at large, perpetuating the gap between the haves and the have-nots. The increasing private investments, both domestic and foreign, in the public health sector are leading to increasing inaccessibility of public health services. Despite the increase in various public health schemes, the quality and impact of government public health services are in continuous decline.

Keywords: commodity, India and Pakistan, market society, public health

Procedia PDF Downloads 283
522 Immersive Environment as an Occupant-Centric Tool for Architecture Criticism and Architectural Education

Authors: Golnoush Rostami, Farzam Kharvari

Abstract:

In recent years, developments in the field of architectural education have resulted in a shift from conventional teaching methods to alternative state-of-the-art approaches in teaching methods and strategies. Criticism in architecture has been a key player both in the profession and in education, but it has mostly been offered by renowned individuals. Hence, not only students and other professionals but also critics themselves may not have the opportunity to experience buildings and must rely on available 2D materials, such as images and plans, that may not result in a holistic understanding and evaluation of buildings. On the other hand, immersive environments provide students and professionals the opportunity to experience buildings virtually and to base their evaluation on experience rather than judging from 2D materials. Therefore, the aim of this study is to compare the effect of experiencing buildings in immersive environments and through 2D drawings, including images and plans, on architecture criticism and architectural education. To this end, three buildings that have parametric brick facades were studied through 2D materials and in Unreal Engine v. 24 as an immersive environment among 22 architecture students who were selected using convenience sampling and were divided into two equal groups using simple random sampling. This study used mixed methods, including quantitative and qualitative methods; the quantitative section was carried out with a questionnaire, and in-depth interviews were used for the qualitative section. A questionnaire was developed for measuring three constructs: privacy regulation based on Altman’s theory, the sufficiency of illuminance levels in the building, and the visual status of the view (visually appealing views based on obstructions that may be caused by the facades). Furthermore, participants had the opportunity to reflect their understanding and evaluation of the buildings in individual interviews. Accordingly, the data collected from the questionnaires were analyzed using independent t-tests and descriptive analyses in IBM SPSS Statistics v. 26, and the interviews were analyzed using the content analysis method. The results of the interviews showed that the participants who experienced the buildings in the immersive environment were able to make a more thorough and precise evaluation of the buildings in comparison to those who studied them through 2D materials. Moreover, the analyses of the respondents’ questionnaires revealed statistically significant differences in the measured constructs between the two groups. The outcome of this study suggests that integrating immersive environments into the profession and architectural education as an effective and efficient tool for architecture criticism is vital, since these environments allow users to make a holistic evaluation of buildings for rigorous and sound criticism.
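
A minimal sketch of the between-group comparison described above (an independent-samples t-test on questionnaire construct scores, here via SciPy rather than SPSS); the scores are invented placeholders on a 1-5 Likert scale, not the study's data.

```python
# Independent-samples t-test comparing the immersive-environment group with the
# 2D-materials group on one questionnaire construct (e.g. privacy regulation).
# Scores are hypothetical placeholders (11 students per group, 1-5 Likert scale).
from scipy import stats

immersive_group = [4.2, 3.8, 4.5, 4.0, 4.4, 3.9, 4.1, 4.3, 3.7, 4.6, 4.0]
drawings_group  = [3.1, 3.4, 2.9, 3.6, 3.0, 3.3, 2.8, 3.5, 3.2, 3.0, 3.4]

t_stat, p_value = stats.ttest_ind(immersive_group, drawings_group, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> the groups differ on this construct
```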

Keywords: immersive environments, architecture criticism, architectural education, occupant-centric evaluation, pre-occupancy evaluation

Procedia PDF Downloads 108
521 Rheological and Sensory Attributes of Dough and Crackers Including Amaranth Flour (Amaranthus spp.)

Authors: Claudia Cabezas-Zabala, Jairo Lindarte-Artunduaga, Carlos Mario Zuluaga-Dominguez

Abstract:

Amaranth is an emerging pseudocereal rich in essential nutrients such as protein and dietary fiber, and it was employed here as an ingredient in the formulation of crackers to evaluate the rheological performance and sensory acceptability of the obtained food. A completely randomized factorial design was used with two factors: (A) the ratio of wheat and amaranth flour used in the preparation of the dough, in proportions of 90:10 and 80:20 (% w/w), and (B) two levels of inulin addition, 8.4% and 16.7%, with two control doughs made from amaranth and wheat flour, respectively. Initially, the functional properties of the formulations mentioned were measured, showing no significant differences in water absorption capacity (WAC) and swelling power (SP), with mean values between 1.66 and 1.81 g/g for WAC and between 1.75 and 1.86 g/g for SP, respectively. The amaranth flour had the highest water holding capacity (WHC), 8.41 ± 0.15 g/g, and emulsifying activity (EA), 74.63 ± 1.89 g/g. Moreover, the rheological behavior, measured with a farinograph, an extensograph, a Mixolab, and the falling index, showed that the formulation containing 20% amaranth flour and 7.16% inulin had a rheological behavior similar to the control produced exclusively with wheat flour, and this formulation was therefore selected for the preparation of the crackers. For this formulation, the farinograph showed a mixing tolerance index of 11 UB, indicating a strong and cohesive dough; likewise, the Mixolab showed that the dough reaches stability at 6.47 min, indicating good resistance to mixing. On the other hand, the extensograph exhibited a dough resistance of 637 UB and an extensibility of 13.4 mm, which corresponds to a strong dough capable of resisting lamination. The falling index was 318 s, which indicates the crumb will retain enough air to enhance the crispness characteristic of a cracker. Finally, a sensory consumer test did not show significant differences in the evaluation of aroma between the control and the selected formulation, while the latter had a significantly lower rating in flavor. However, a purchase intention of 70% was observed among the population surveyed. The results obtained in this work give perspectives for the industrial use of amaranth in baked goods. Additionally, amaranth has been a product typically linked to indigenous populations in the Andean South American countries; therefore, the search for diversification and alternatives of use for this pseudocereal has an impact on the social and economic conditions of such communities. The technological versatility and nutritional quality of amaranth are an advantage for consumers, favoring the consumption of healthy products with important contributions of dietary fiber and protein.

Keywords: amaranth, crackers, rheology, pseudocereals, kneaded products

Procedia PDF Downloads 98
520 The Markers -mm and dämmo in Amharic: Developmental Approach

Authors: Hayat Omar

Abstract:

Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events. According to the linguistic tools they have acquired, speakers select the one that brings out the most communicative effect to convey their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers. We seek to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8 and 10-12 years and by adult Amharic speakers. The experimental material we used to collect our data is a series of pictures without text, 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional. It marks the end of the negative verbal structure, it is found in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. On the other hand, dämmo has a more limited function and has not attracted the attention of many authors. The only approach we could find analyzes it as a 'monosyndetic' coordinator. Paralleling these two elements made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral, depending on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes marks the inter-phrastic link. The information is seized by anaphora or presupposition: -mm points upstream while dämmo points downstream; the latter requires new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that -mm and dämmo, although all the speakers use them both, do not always have the same scope according to the speaker and vary with age. dämmo is mainly used to mark a contrastive topic to signal the concomitance of events. It is more commonly used in young children’s narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.

Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics

Procedia PDF Downloads 110
519 Improving the Crashworthiness Characteristics of Long Steel Circular Tubes Subjected to Axial Compression by Inserting a Helical Spring

Authors: Mehdi Tajdari, Farzad Mokhtarnejad, Fatemeh Moradi, Mehdi Najafizadeh

Abstract:

Nowadays, energy-absorbing devices are widely used in all vehicles and moving parts such as railway coaches, aircraft, ships and lifts. The aim is to protect these structures from serious damage when subjected to impact loads, or to minimize human injuries when a collision occurs in transportation systems. These energy-absorbing devices can dissipate kinetic energy in a wide variety of ways, such as friction, fracture, plastic bending, crushing, cyclic plastic deformation and metal cutting. On the other hand, various structures may be used as collapsible energy absorbers. Metallic cylindrical tubes have attracted much attention due to their high stiffness and strength combined with low weight and ease of manufacturing. As a matter of fact, favorable crashworthiness characteristics for energy dissipation purposes can be achieved from the axial collapse of tubes when they crush progressively in symmetric modes. However, experimental and theoretical results have shown that, depending on various parameters such as tube geometry, material properties of the tube, and boundary and loading conditions, circular tubes buckle in different modes of deformation, namely, diamond and Euler collapsing modes. It is shown that when the tube length is greater than the critical length, the tube deforms in the overall Euler buckling mode, which is an inefficient mode of energy absorption and needs to be avoided in crashworthiness applications. This study develops a new method with the aim of improving the energy absorption characteristics of long steel circular tubes. Inserting a helical spring into the tubes is proved experimentally to be an efficient solution. In fact, when a long tube is subjected to an axial compression load, the spring prevents the undesirable Euler or diamond collapsing modes. This is because the spring reinforces the internal wall of the tube and causes symmetric deformation in the tube. In this research, three specimens were prepared and three tests were performed. The dimensions of the tubes were selected so that buckling occurs under axial compression load. In the second and third tests, a spring was inserted into the tubes and they were subjected to axial compression load under quasi-static and impact loading, respectively. The results showed that in the second and third tests buckling did not happen and the tubes deformed in symmetric modes, which are desirable for energy absorption.
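
The 'critical length' mentioned above comes from classical Euler column buckling, P_cr = π²EI/(KL)²; the sketch below evaluates it for a thin-walled steel circular tube with illustrative dimensions (assumed values, not the actual test specimens), assuming pinned-pinned end conditions.

```python
# Rough Euler-buckling estimate for a thin-walled steel circular tube under
# axial compression. Dimensions and material values are illustrative only.
import math

E  = 200e9    # Young's modulus of steel, Pa
Do = 0.050    # outer diameter, m
t  = 0.0015   # wall thickness, m
L  = 0.60     # tube length, m
K  = 1.0      # effective length factor (pinned-pinned ends assumed)

Di = Do - 2 * t
I  = math.pi / 64 * (Do**4 - Di**4)        # second moment of area of the annulus
A  = math.pi / 4 * (Do**2 - Di**2)         # cross-sectional area

P_euler = math.pi**2 * E * I / (K * L)**2  # global (Euler) buckling load
sigma_euler = P_euler / A                  # corresponding axial stress

print(f"Euler buckling load  : {P_euler/1e3:.1f} kN")
print(f"Euler buckling stress: {sigma_euler/1e6:.1f} MPa")
# If sigma_euler falls below the stress needed for progressive axisymmetric
# folding, the tube is 'too long' and collapses in the inefficient Euler mode.
```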

Keywords: energy absorption, circular tubes, collapsing deformation, crashworthiness

Procedia PDF Downloads 314
518 Boiler Ash as a Reducer of Formaldehyde Emission in Medium-Density Fiberboard

Authors: Alexsandro Bayestorff da Cunha, Dpebora Caline de Mello, Camila Alves Corrêa

Abstract:

In the production of fiberboards, an adhesive based on urea-formaldehyde resin is used, which has the advantages of low cost, homogeneity of distribution, solubility in water, high reactivity in an acid medium, and high adhesion to wood. On the other hand, its disadvantages are low resistance to humidity and the release of formaldehyde. The objective of the study was to determine the viability of adding industrial boiler ash to the urea-formaldehyde-based adhesive for the production of medium-density fiberboard. The raw material used was composed of Pinus spp. fibers, urea-formaldehyde resin, paraffin emulsion, ammonium sulfate, and boiler ash. The experimental plan, consisting of 8 treatments, was completely randomized with a factorial arrangement, with 0%, 1%, 3%, and 5% ash added to the adhesive, with and without the application of a catalyst. In each treatment, 4 panels were produced with a density of 750 kg.m⁻³, dimensions of 40 x 40 x 1.5 cm, 12% urea-formaldehyde resin, 1% paraffin emulsion and hot pressing at a temperature of 180 ºC and a pressure of 40 kgf/cm² for 10 minutes. The different compositions of the adhesive were characterized in terms of viscosity, pH, gel time and solids content, and the panels by their physical and mechanical properties, in addition to evaluation using the IMAL DPX300 X-ray densitometer and formaldehyde emission by the perforator method. The results showed a significant reduction of all adhesive properties with the use of the catalyst, regardless of the treatment, while increasing the ash percentage provided an increase in the average values of viscosity, gel time, and solids and a reduction in pH for the panels with a catalyst; for panels without a catalyst, the behavior was the opposite, with the exception of solids. For the physical properties, the results for density, compaction ratio, and thickness were equivalent and in accordance with the standard, while the moisture content was significantly reduced with the use of the catalyst but without the influence of the percentage of ash. The density profile for all treatments was characteristic of medium-density fiberboard, with more compacted and denser surfaces when compared to the central layer. Thickness swelling was not influenced by the catalyst or the use of ash, presenting average values within the normalized parameters. For the mechanical properties, the ash negatively influenced the modulus of rupture from 1% onwards and the traction test from 3% onwards; however, only the latter property, at the percentages of 3% and 5%, was below the minimum limit of the standard. The use of the catalyst and of ash at percentages of 3% and 5% reduced the formaldehyde emission of the panels; however, only the panels that used adhesive with a catalyst presented emissions below 8 mg of formaldehyde per 100 g of panel. In this way, it can be said that boiler ash can be added to the adhesive with a catalyst at up to 1% without impairing the technological properties.

Keywords: reconstituted wood panels, formaldehyde emission, technological properties of panels, perforator

Procedia PDF Downloads 44
517 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend

Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes

Abstract:

This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests in a single-cylinder, air-cooled diesel engine. This setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary the engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate and a digital anemometer coupled with an air box to measure the air flow rate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at mass concentrations of 2, 6, 8 and 10% of the total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of the total fuel energy, respectively. Due to the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be stated from the test results. A reduction in specific fuel consumption as a function of the increase in hydrogen concentration was noted. Likewise, carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbon (HC) emissions decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because the average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and heat release rate inside the cylinder, since the fuel ignition delay was shorter due to the increased hydrogen content. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive for all kinds of fuels used in diesel generators.
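
The mapping from hydrogen mass fraction to energy fraction quoted above is straightforward arithmetic on the lower heating values; the sketch below reproduces the abstract's figures assuming LHVs of roughly 120 MJ/kg for hydrogen and about 42.5 MJ/kg for the B7 blend (assumed values, not stated in the abstract).

```python
# Energy fraction of hydrogen in the total fuel (B7 + H2) as a function of its
# mass fraction. LHV values are assumed: ~120 MJ/kg for H2, ~42.5 MJ/kg for B7.
LHV_H2 = 120.0   # MJ/kg
LHV_B7 = 42.5    # MJ/kg

for mass_frac in (0.02, 0.06, 0.08, 0.10):
    e_h2 = mass_frac * LHV_H2
    e_b7 = (1.0 - mass_frac) * LHV_B7
    energy_frac = e_h2 / (e_h2 + e_b7)
    print(f"{mass_frac:>4.0%} H2 by mass -> {energy_frac:.0%} of total fuel energy")
# Prints roughly 5%, 15%, 20% and 24%, matching the fractions quoted in the abstract.
```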

Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions

Procedia PDF Downloads 331
516 Understanding Strategic Engagement on the Conversation Table: Countering Terrorism in Nigeria

Authors: Anisah Ari

Abstract:

The effects of organized crime permeate all facets of life, including public health, socio-economic endeavors, and human security. If any element of these is affected, it impacts large-scale national and global interests. Seeking to address terrorist networks through technical thinking is like trying to kill a weed by just cutting off its branches: it will re-develop and expand in proportions beyond one's imagination, even in horrific ways that threaten human security. The continent of Africa has been bedeviled by this menace, with little or no solution to the problem. Nigeria is dealing with a protracted insurgency perpetrated by a sect opposed to any form of westernization. Reimagining approaches to dealing with pressing issues like terrorism may require engaging the right set of people in the conversation for any sustainable change. These are people who have lived through the daily effects of the violence that ensues from terrorist activities. Effective leadership is required for an inclusive process, where spaces are created for diverse voices to be heard and multiple perspectives are listened to, and not just heard, in a way that supports the determination of a realistic outcome. Addressing the insurgency in Nigeria has been marked by disinformation and uncertainty, which may be due in part to poor leadership or to the repeated application of technical solutions to an adaptive challenge. Peacemaking efforts in Nigeria have focused on behaviors, attitudes and practices that contribute to violence. However, it is important to consider the underlying issues that build up, ignite and fan the flames of violence, looking at conflict as a complex system: issues like climate change, low employment rates, corruption and the impunity of discrimination due to ethnicity and religion. This article looks at the option of a more relational way of addressing the insurgency, through adaptive approaches that embody engagement and solutions with the people rather than for the people. The construction of a local turn in peacebuilding is informed by the need to create a locally driven and sustained peace process that embodies the culture and practices of the people in enacting an everyday peace, beyond a perennial and universalist outlook. A critical analysis that explores socially identified individuals and situations will be made, considering a more adaptive approach to a complex existential challenge rather than a universalist frame. A case study and ethnographic research approach will be used to understand what other scholars have documented on the matter and also to gain a first-hand understanding of the experiences and viewpoints of the participants.

Keywords: terrorism, adaptive, peace, culture

Procedia PDF Downloads 79
515 A Novel Nano-Chip Card Assay as Rapid Test for Diagnosis of Lymphatic Filariasis Compared to Nano-Based Enzyme Linked Immunosorbent Assay

Authors: Ibrahim Aly, Manal Ahmed, Mahmoud M. El-Shall

Abstract:

Filariasis is a parasitic disease caused by small roundworms. The filarial worms are transmitted and spread by blood-feeding black flies and mosquitoes. Lymphatic filariasis (elephantiasis) is caused by Wuchereria bancrofti, Brugia malayi, and Brugia timori. The elimination of lymphatic filariasis necessitates an increasing demand for valid, reliable, and rapid diagnostic kits. Nanodiagnostics involve the use of nanotechnology in clinical diagnosis to meet the demands for increased sensitivity, specificity, and early detection in less time. The aim of this study was to evaluate a nano-based enzyme-linked immunosorbent assay (ELISA) and a novel nano-chip card as a rapid test for the detection of filarial antigen in serum samples of human filariasis, in comparison with traditional ELISA. Serum samples were collected from infected humans across Egypt's governorates. After informed consent was obtained, a total of 45 blood samples were collected from infected individuals residing in different villages in Gharbea governorate, which is a nonendemic region for bancroftian filariasis, together with samples from healthy persons living in nonendemic locations (20 persons) and sera from 20 patients affected by other parasites. Microfilariae were checked in thick smears of 20 µl night blood samples collected between 20:00 and 22:00 hrs. All of these individuals underwent the following procedures: history taking, clinical examination, and laboratory investigations, which included examination of blood samples for microfilariae using thick blood films and serological tests for the detection of circulating filarial antigen using polyclonal antibody ELISA, nano-based ELISA, and the nano-chip card. In the present study, a recently reported polyclonal antibody specific to a tegumental filarial antigen was used in developing the nano-chip card and nano-ELISA, compared to traditional ELISA, for the detection of circulating filarial antigen in sera of patients with bancroftian filariasis. The performance of the ELISA was evaluated using 45 serum samples. The ELISA was positive with sera from microfilaremic bancroftian filariasis patients (n = 36), giving a sensitivity of 80%. Circulating filarial antigen was detected in 39/45 patients using nano-ELISA, a sensitivity of 86.6%. On the other hand, 42 out of 45 patients were positive for circulating filarial antigen using the nano-chip card, a sensitivity of 93.3%. In conclusion, the novel nano-chip assay could potentially be a promising alternative antigen detection test for bancroftian filariasis.
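
The sensitivities quoted above follow directly from the positive counts among the 45 confirmed cases; a minimal sketch of the arithmetic is given below (specificity against the healthy and other-parasite controls is not reported in the abstract and is not computed here).

```python
# Sensitivity = true positives / confirmed cases, for the three assays
# evaluated on the same 45 bancroftian filariasis patients.
confirmed_cases = 45
true_positives = {"traditional ELISA": 36, "nano-based ELISA": 39, "nano-chip card": 42}

for assay, tp in true_positives.items():
    print(f"{assay}: sensitivity = {tp / confirmed_cases:.1%}")
# ~80.0%, 86.7% and 93.3%, matching the abstract's reported values (to rounding).
```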

Keywords: lymphatic filariasis, nanotechnology, rapid diagnosis, elisa technique

Procedia PDF Downloads 95
514 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) scanning of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. In this regard, we aimed to protect the eyes with barium sulfate shield(s) during CT scans and investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, with a water-based marker simultaneously, by a multislice CT scanner (SOMATOM Definition Flash) under a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured with Gafchromic films, whose dose response curve was previously fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes exhibiting the regions of interest on the zygomatic, orbital and nasal bones of the head phantom, as well as the water-based marker, were used for calculating the signal-to-noise and contrast-to-noise ratios. The application of the barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the DSCT images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and the eyeballs while preferentially decreasing the signal-to-noise ratios upon the use of the barium sulfate shield. The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues that are close to the shield. The application of both shields for eye protection is hence not recommended when seeking intraorbital lesions.
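
For reference, the signal-to-noise and contrast-to-noise ratios can be computed per region of interest as sketched below, using one common definition (ROI mean over its standard deviation for SNR; absolute difference of two ROI means over background noise for CNR); the pixel values are placeholders and the study's exact ROI definitions may differ.

```python
# One common definition of SNR and CNR for CT regions of interest (ROIs).
# All pixel arrays are hypothetical placeholders.
import numpy as np

def snr(roi):
    """Mean HU value of the ROI divided by its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, background):
    """Absolute difference of the two ROI means over the background noise."""
    a, b = np.mean(roi_a), np.mean(roi_b)
    noise = np.std(np.asarray(background, dtype=float))
    return abs(a - b) / noise

# Hypothetical HU samples: zygomatic bone ROI, eyeball ROI, uniform background.
bone       = np.random.normal(700, 40, size=500)
eyeball    = np.random.normal(30, 15, size=500)
background = np.random.normal(0, 10, size=500)

print(f"SNR(bone) = {snr(bone):.1f}, CNR(bone vs eyeball) = {cnr(bone, eyeball, background):.1f}")
```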

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 245
513 Evaluation on the Compliance of Essential Intrapartum Newborn Care among Nurses in Selected Government Hospital in Manila

Authors: Eliza Torrigue, Efrelyn Iellamo

Abstract:

Maternal death is one of the rising health issues in the Philippines. It is alarming to know that in every hour of each day, a mother gives birth to a child who may not live to see the next day. Statistics show that the intrapartum period and the third stage of labor are the most crucial periods for the expectant mother, as are the first six hours of life for the newborn. To address the issue, the Essential Intrapartum Newborn Care (EINC) protocol was developed. Through this, Obstetric Delivery Room (OB-DR) nurses shall be updated with evidence-based maternal and newborn care to ensure patient safety, thus reducing maternal and child mortality. This study aims to describe the compliance of hospitals, especially of OB-DR nurses, with the EINC protocols. The researcher aims to link the profile variables of the respondents in terms of age, length of service and formal training to their compliance with the EINC protocols. The outcome of the study is geared towards the development of an appropriate training program for OB-DR nurses assigned to the delivery rooms of the hospitals, based on the study's results, in order to sustain the EINC standards. A descriptive correlational method was used. The sample consists of 75 OB-DR nurses from three government hospitals in the City of Manila, namely, Ospital ng Maynila Medical Center, Tondo Medical Center, and Gat Andres Bonifacio Memorial Medical Center. Data were collected using an evaluative checklist. Ranking, weighted mean, Chi-square and Pearson's r were used to analyze the data. The level of compliance with the EINC protocols by the respondents was evaluated with an overall mean score of 4.768, implying that OB-DR nurses comply closely with the step-by-step procedures of the EINC. Furthermore, the data show that formal training on EINC has a significant relationship with OB-DR nurses' level of compliance during cord care, AMTSL, and immediate newborn care until the first ninety minutes to six hours of life. However, the respondents' age and length of service do not have a significant relationship with the compliance of OB-DR nurses with the EINC protocols. In the pursuit of decreasing maternal mortality in the Philippines, the EINC protocols have been widely implemented in the country, especially in the government hospitals where most deliveries happen. In this study, it was found that OB-DR nurses adhere and are highly compliant to the standards, assuring that an optimum level of care is delivered to the mother and newborn. Formal training on EINC, on the other hand, creates the most impact on the compliance of nurses. It is therefore recommended that a structured enhancement training program be established to plan, implement and evaluate the EINC protocols in these government hospitals.

Keywords: compliance, intrapartum, newborn care, nurses

Procedia PDF Downloads 371
512 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a burning issue worldwide, it has become a highly prioritized and important area of service provision in Bangladesh. The sensory deficit not only hampers the normal development of a child, it also hampers the learning process and functional independence. The purpose of this study was to find out the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research work. This study enrolled eighty children with autism and their parents, selected using the systematic sampling method. Data were collected through the Short Sensory Profile (SSP) assessment tool, a questionnaire consisting of 38 items, and qualified graduate occupational therapists were directly involved in interviewing parents as well as observing the children's responses to sensory-related activities in four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. This study revealed that 78.25% of the children with autism had significant sensory processing dysfunction based on their sensory responses to the relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them. On the other hand, most of the children (95%) showed definite to probable differences in sensory processing, including under-responsiveness or sensation seeking, auditory filtering, and tactile sensitivity. The results also show that 64 children had a definite difference in sensory processing, meaning these children with autism suffered from sensory difficulties that had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important and will help to provide appropriate sensory input to minimize maladaptive behavior and help the child reach the normal range of adaptive behavior.

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 34
511 Euthanasia Reconsidered: Voting and Multicriteria Decision-Making in Medical Ethics

Authors: J. Hakula

Abstract:

Discussion on euthanasia is a continuous process. Euthanasia is defined as 'deliberately ending a patient's life by administering life-ending drugs at the patient's explicit request'. With few exceptions, human societies worldwide have not been able to agree on some fundamental issues concerning ultimate decisions of life and death. Outranking methods in voting-oriented social choice theory and multicriteria decision-making (MCDM) can be applied to issues in medical ethics. There is a wide range of voting methods, and using different methods the same group of voters can end up with different outcomes. In the MCDM context, decision alternatives can be substituted for candidates, and criteria for voters. The view chosen here is that of a single decision-maker. Initially, three alternatives and three criteria are chosen. Pairwise and basic positional voting rules - plurality, anti-plurality and the Borda count - are applied. In the MCDM solution, criteria are weighted by giving them more 'votes' the more important the decision-maker considers them. A hypothetical example on evaluating properties of euthanasia consists of three alternatives A, B, and C, which are ranked according to three criteria - the patient's willingness to cooperate, general action orientation (active/passive), and cost-effectiveness - the criteria having weights 7, 5, and 4, respectively. Using the plurality rule and the weights given to the criteria, A is the best alternative, with B and C thereafter. In pairwise comparisons, both B and C defeat A with weight scores of 9 to 7. On the other hand, B is defeated by C with weights 11 to 5. Thus, C (i.e. the so-called Condorcet winner) defeats both A and B. The best alternative in the plurality sense is not necessarily the best in the pairwise sense, and the conflict remains unsolved with or without additional weights. Positional rules are sensitive to variations in the alternative set. In the example above, the plurality rule gives the ranking ABC. If we leave out C, the plurality ranking between A and B results in BA. Withdrawing B or A, the ranking is CA or CB, respectively. In pairwise comparisons, an analogous problem emerges when the number of criteria is varied. Cyclic preferences may lead to a total tie, and no (rational) choice between the alternatives can be made. In conclusion, the choice of the best commitment to re-evaluate euthanasia, with the criteria left unchanged, depends entirely on the evaluation method used. The right strategies matter, too. Future studies might address the problem of abstention - a situation where voters do not vote and still their best candidate may win, or, vice versa, where actively giving the ballot to their first-ranked choice might lead to a total loss. In MCDM terms, a decision might occur where some central criteria are not actively involved in the best choice made.
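
A small sketch reproducing the worked example (criteria acting as weighted 'voters'). Because the abstract only gives the aggregate scores, the per-criterion rankings below are one profile consistent with those scores; it recovers A as the weighted-plurality winner and C as the Condorcet winner.

```python
# Weighted positional rules and pairwise (Condorcet) comparisons for the
# three-alternative example. The per-criterion rankings below are one profile
# consistent with the aggregate scores quoted in the abstract (A best under
# plurality; B and C beat A 9-7; C beats B 11-5).
from itertools import combinations

profile = [                # (criterion weight, ranking from best to worst)
    (7, ["A", "C", "B"]),  # patient's willingness to cooperate
    (5, ["B", "C", "A"]),  # general action orientation (active/passive)
    (4, ["C", "B", "A"]),  # cost-effectiveness
]
alternatives = ["A", "B", "C"]
total_weight = sum(w for w, _ in profile)

def positional(points):
    """Weighted score when rank i earns points[i]; e.g. [1, 0, 0] = plurality."""
    totals = {a: 0 for a in alternatives}
    for weight, ranking in profile:
        for rank, alt in enumerate(ranking):
            totals[alt] += weight * points[rank]
    return totals

print("plurality     :", positional([1, 0, 0]))  # A wins with 7 (vs 5 and 4)
print("Borda count   :", positional([2, 1, 0]))  # C wins the weighted Borda count
print("anti-plurality:", positional([1, 1, 0]))  # C wins (fewest weighted last places)

for x, y in combinations(alternatives, 2):
    x_wins = sum(w for w, r in profile if r.index(x) < r.index(y))
    print(f"{x} vs {y}: {x_wins}-{total_weight - x_wins}")
# A vs B: 7-9, A vs C: 7-9, B vs C: 5-11 -> C defeats both A and B (Condorcet winner)
```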

Keywords: medical ethics, euthanasia, voting methods, multicriteria decision-making

Procedia PDF Downloads 128
510 Study of Oxidative Stability, Cold Flow Properties and Iodine Value of Macauba Biodiesel Blends

Authors: Acacia A. Salomão, Willian L. Gomes da Silva, Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

Biodiesel physical and chemical properties depend on the composition of the raw material used in its synthesis. Saturated fatty acid esters confer high oxidative stability, while unsaturated fatty acid esters improve the cold flow properties. In this study, an alternative vegetal source - macauba kernel oil - was used in the biodiesel synthesis instead of conventional sources. Macauba can be collected from native palm trees and is found in several regions of Brazil. Its oil is a promising source when compared to several other oils commonly obtained from food products, such as soybean, corn or canola oil, due to its specific characteristics. However, the usage of biodiesel made from macauba oil alone is not recommended due to the difficulty of producing macauba in large quantities. For this reason, this project proposes the usage of blends of macauba oil with conventional oils. These blends were prepared by mixing the macauba biodiesel with biodiesels obtained from soybean, corn, and residual frying oil, in the following proportions: 20:80, 50:50 and 80:20 (w/w). Three parameters were evaluated, using the standard methods, in order to check the quality of the produced biofuel and its blends: oxidative stability, cold filter plugging point (CFPP), and iodine value. The induction period (IP) expresses the oxidative stability of the biodiesel, the CFPP expresses the lowest temperature at which the biodiesel flows through a filter without plugging the system, and the iodine value is a measure of the number of double bonds in a sample. The biodiesels obtained from soybean, residual frying oil and corn presented iodine values higher than 110 g/100 g, low oxidative stability and low CFPP. The IP values obtained for these biodiesels were lower than 8 h, which is below the recommended standard value. On the other hand, the CFPP values were found to be within the allowed limit (5 ºC is the maximum). Regarding the macauba biodiesel, a low iodine value was observed (31.6 g/100 g), which indicates the presence of a high content of saturated fatty acid esters. The presence of saturated fatty acid esters should imply a high oxidative stability (which was found accordingly, with IP = 64 h) and a high CFPP, but curiously the latter was not observed (-3 ºC). This behavior can be explained by looking at the size of the carbon chains, as 65% of this biodiesel is composed of short-chain saturated fatty acid esters (fewer than 14 carbons). The high oxidative stability and the low CFPP of macauba biodiesel are what make this biofuel a promising source. The soybean, corn and residual frying oil biodiesels also have low CFPP, but low oxidative stability. Therefore the blends proposed in this work, compared to the common biodiesels, maintain the flow properties but present enhanced oxidative stability.

Keywords: biodiesel, blends, macauba kernel oil, stability oxidative

Procedia PDF Downloads 507
509 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronic boards in places with severe size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was carried out in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate the handling of the cables and maintenance. The tests were made by measuring the contact voltage acquired by a person who was walking along the room with footwear of different qualities. In addition, we took some measurements of the voltage accumulated on a person in other situations, like running or sitting up and down on an office chair. The voltages were taken in real time with an electrostatic voltmeter and dedicated control software. It is shown that peak voltages as high as 5 kV were measured at ambient humidity of more than 30%, which is within the range of class 3A according to the HBM standard. In order to complete the results, we made the same experiment in different spaces with alternative types of floor, like a synthetic floor and an earthenware floor, obtaining peak voltages much lower than the ones measured with the floating synthetic floor. The grounding quality one achieves with this kind of floor can hardly beat the one typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent the overheating of the boards probably contributed in a significant way to the charge accumulated in the room. During the assessment of the quality of the static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is the fact that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe that this can be helpful not only for qualifying the static charge control in a laboratory but also for assessing any procedure oriented to minimize the risk of electrostatic discharge events.

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 106
508 Strengthening by Assessment: A Case Study of Rail Bridges

Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas

Abstract:

The United Kingdom has one of the oldest railway networks in the world dating back to 1825 when the world’s first passenger railway was opened. The network has some 40,000 bridges of various construction types using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom, consequently the cost effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration and to extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years and a structured approach to assessments is adopted with three main types of progressively more detailed assessments used. These assessment types include Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment dictated to some extent by the relevant standards which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover ‘latent strength’ and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the ‘strengthening by assessment’ achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details that are not specifically covered by assessment codes are outlined. Metallic bridges which are susceptible to loss of section size through corrosion have largest scope for improvement by the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating by means of a Level 1 Assessment due to the inadequate restraint provided by U-Frame action are examined and the increase in assessed capacity given by the Level 2 Assessment is outlined.

Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening

Procedia PDF Downloads 286
507 Pyridine-N-oxide Based AIE-active Triazoles: Synthesis, Morphology and Photophysical Properties

Authors: Luminita Marin, Dalila Belei, Carmen Dumea

Abstract:

Aggregation-induced emission (AIE) is an intriguing optical phenomenon recently evidenced by Tang and his co-workers, in which aggregation works constructively to improve light emission. The challenging AIE phenomenon is quite the opposite of the notorious aggregation-caused quenching (ACQ) of light emission in the condensed phase, and it is in line with the requirements of photonic and optoelectronic devices, which need solid-state emissive substrates. This paper reports a series of ten new aggregation-induced emission low-molecular-weight compounds based on triazole and pyridine-N-oxide heterocyclic units bonded by short flexible chains, obtained by a ‘click’ chemistry reaction. The compounds present extremely weak luminescence in solution but strong light emission in the solid state. To distinguish the influence of the degree of crystallinity on the emission efficiency, the photophysical properties were explored by UV-vis and photoluminescence spectroscopy in solution, water suspension, and amorphous and crystalline films. In parallel, the morphology of the compounds in the above-mentioned states was monitored by dynamic light scattering, scanning electron microscopy, atomic force microscopy and polarized light microscopy. To further understand the relationship between structural design and photophysical properties, single-crystal X-ray diffraction was also performed on some of the compounds under study. The UV-vis absorption spectra of the triazole water suspensions indicated behaviour typical of nanoparticle formation, while the photoluminescence spectra revealed an emission intensity enhancement of up to 921-fold for the crystalline films compared to solutions, clearly indicating AIE behaviour. The compounds have a tendency to aggregate, forming nano- and micro-crystals in the shape of rose-like assemblies and fibres. The integrity of the crystals is maintained by strong lateral intermolecular forces, while the absence of face-to-face forces explains the enhanced luminescence in the crystalline state, in which intramolecular rotations are restricted. The studied flexible triazoles draw attention to a new structural design in which small, biologically friendly luminophore units are linked together by short flexible chains. This design enlarges the variety of AIE luminogens towards flexible molecules, guiding further efforts in the development of new AIE structures for appropriate applications, with biological ones especially envisaged.

Keywords: aggregation induced emission, pyridine-N-oxide, triazole

Procedia PDF Downloads 433
506 The Debate over Dutch Universities: An Analysis of Stakeholder Perspectives

Authors: B. Bernabela, P. Bles, A. Bloecker, D. DeRock, M. van Es, M. Gerritse, T. de Jongh, W. Lansing, M. Martinot, J. van de Wetering

Abstract:

A heated debate has been taking place over the last few years concerning research and teaching at Dutch universities. The ministry of science and education has published reports on its strategy to improve university curricula and position the Netherlands as a globally competitive knowledge economy. These reports have provoked an uproar of responses from think tanks, concerned academics, and the media. At the center of the debate is disagreement over who should determine the Dutch university curricula and what these curricula should look like. Many stakeholders in the higher education system have voiced their opinion, while others have not been heard. The result is that the diversity of visions is ignored or taken for granted in the official reports. Recognizing this gap in stakeholder analysis, the aim of this paper is to bring attention to the wide range of perspectives on who should be responsible for designing higher education curricula. Based on a previous analysis by the Rathenau Institute, we distinguish five different groups of stakeholders: government, the business sector, university faculty and administration, students, and the societal sector. We conducted semi-structured, in-depth interviews with representatives from each stakeholder group and distributed quantitative questionnaires to people in the societal sector (i.e. people not directly affiliated with universities or graduates). Preliminary data suggest that the stakeholders have different target points concerning the university curricula. Representatives from the governmental sector tend to place special emphasis on the link between research and education, while representatives from the business sector focus instead on greater opportunities for students to obtain practical experience in the job market. Responses from students reflect a belief that they should be able to influence the curriculum in order to compete with other students on the international job market. University faculty, on the other hand, express concern that focusing on the labor market puts undue pressure on students and compromises the quality of education. Interestingly, the opinions of members of ‘society’ seem to be relatively unchanged by political and economic shifts. Following a comprehensive analysis of the data, we believe that our results will make a significant contribution to the debate on university education in the Netherlands. These results should be regarded as a foundation for further research concerning the direction of Dutch higher education, for only if we take into account the different opinions and views of the various stakeholders can we decide which steps to take. Moreover, the Dutch experience offers lessons to other countries as well. As the internationalization of higher education is occurring faster than ever before, universities throughout Europe and globally are experiencing many of the same pressures.

Keywords: Dutch University curriculum, higher education, participants’ opinions, stakeholder perspectives

Procedia PDF Downloads 319
505 Generative Behaviors and Psychological Well-Being in Mexican Elders

Authors: Ana L. Gonzalez-Celis, Edgardo Ruiz-Carrillo, Karina Reyes-Jarquin, Margarita Chavez-Becerra

Abstract:

In recent decades, aging has been viewed from a more positive perspective, in which it is not only about losses and damage but also about being at a stage where one can enjoy life and live with well-being and quality of life. The challenge of feeling better is to identify the resources that seniors have. For that reason, research on psychological well-being has focused on affect and life satisfaction (hedonic well-being), while a more recent tradition focuses on the development of capabilities and personal growth, considering both as the main indicators of quality of life. A resource that can be drawn on in later age is generativity, which refers to the ability of older people to develop and grow through activities that contribute to the improvement of the context in which they live and participate. In this sense, generative interest is understood as a favourable attitude that contributes to the common benefit while strengthening and enriching social institutions, ensuring continuity between generations and social development. Generative behavior, as distinct from generative interest, is the expression of that attitude, reflected in activities that make a social contribution and benefit the generations to come. Hence, the purpose of the research was to test whether there is an association between the type of generative behaviour and psychological well-being and its dimensions. To this end, 188 Mexican adults from 60 to 94 years old (M = 69.78), 67% women and 33% men, completed two instruments: the Ryff Well-Being Scales, measuring psychological well-being with 39 items in two dimensions (hedonic and eudaimonic well-being), and the Loyola Generative Behaviors Scale, grouped in five categories: knowledge transmitted to the next generation, things to be remembered, creativity, being productive, contribution to the community, and responsibility for other people. In addition, a socio-demographic data sheet and self-reported health status were collected. The results indicated that psychological well-being and its dimensions were significantly associated with the presence of generative behavior, with the level of well-being being higher when the frequency of certain generative behaviours stood out. The behavior associated with the greatest psychological well-being (M = 81.04, SD = 8.18) was “things to be remembered”; the behavior associated with the greatest hedonic well-being (M = 73.39, SD = 12.19) was “responsibility for other people”; and the behavior associated with the greatest eudaimonic well-being (M = 84.61, SD = 6.63) was “things to be remembered”. The most important findings highlight the importance of generative behaviors in adulthood, providing empirical evidence that generativity in the last stage of life is associated with well-being. However, given the differences in the level of well-being across types of generative behaviors, we propose that generativity should not be treated as an isolated construct, but needs other contextualized and related constructs that can operate simultaneously at different levels, taking into account the relationship between the environment and the individual and encompassing both the social and the psychological dimension.
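
As an illustration of how an association between the presence of a generative behavior and a well-being score can be tested, the sketch below runs a Welch two-sample t-test on simulated scores for participants with and without a given behavior. The data, group sizes and variable names are invented for the example and do not reproduce the study’s dataset or analysis plan.

    import numpy as np
    from scipy import stats

    # Minimal sketch: compare well-being scores between elders who do and do
    # not report a given generative behavior. All values are simulated.

    rng = np.random.default_rng(0)
    with_behavior = rng.normal(loc=81, scale=8, size=60)      # hypothetical scores
    without_behavior = rng.normal(loc=75, scale=9, size=50)   # hypothetical scores

    t_stat, p_value = stats.ttest_ind(with_behavior, without_behavior,
                                      equal_var=False)        # Welch's t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")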

Keywords: eudaimonic well-being, generativity, hedonic well-being, Mexican elders, psychological well-being

Procedia PDF Downloads 243
504 Chronic Wrist Pain among Handstand Practitioners: A Questionnaire Study

Authors: Martonovich Noa, Maman David, Alfandari Liad, Behrbalk Eyal

Abstract:

Introduction: The human body is designed for upright standing and walking, with the lower extremities and axial skeleton supporting weight-bearing. Constant weight-bearing on joints not meant for this function can lead to various pathologies, as seen in wheelchair users. Handstand practitioners use their wrists as weight-bearing joints during their activities, but little is known about wrist injuries in this population. This study aims to investigate the epidemiology of wrist pain among handstand practitioners, as no such data currently exist. Methods: The study is a cross-sectional online survey conducted among athletes who regularly practice handstands. Participants were asked to complete a three-part questionnaire regarding their workout regimen, training habits, and history of wrist pain. The inclusion criteria were athletes over 18 years old who had practiced handstands more than twice a month for at least 4 months. All data were collected using Google Forms, organized and anonymized using Microsoft Excel, and analyzed using IBM SPSS 26.0. Descriptive statistics were calculated, and potential risk factors were tested using asymptotic t-tests and Fisher's tests. Differences were considered significant when p < 0.05. Results: This study surveyed 402 athletes who regularly practice handstands to investigate the prevalence of chronic wrist pain and potential risk factors. The participants had a mean age of 31.3 years; most were male and had an average of 5 years of training experience. Chronic wrist pain was reported by 56% of participants, and 14.4% reported a history of distal radial fracture. Yoga was the most practiced form, followed by Capoeira. No significant differences were found in demographic data between participants with and without chronic wrist pain, and no significant associations were found between chronic wrist pain prevalence and warm-up routines or protective aids. Conclusion: The lower half of the body is built to handle weight-bearing and impact, while transferring the load to the upper extremities can lead to various pathologies. Athletes who perform handstands are particularly prone to chronic wrist pain, which affects over half of them. Warm-up sessions and protective instruments such as wrist braces do not seem to prevent chronic wrist pain, and there are no significant differences in age or training volume between athletes with and without the condition. Further research is needed to understand the causes of chronic wrist pain in athletes, given the growing popularity of sports and activities that can cause this type of injury.
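
A minimal sketch of the kind of tests reported above, a Fisher exact test for a categorical risk factor and a t-test for a continuous one, is given below. The contingency counts and training-volume values are hypothetical and are not the survey data.

    import numpy as np
    from scipy import stats

    # Minimal sketch: test whether wearing wrist braces is associated with
    # chronic wrist pain (Fisher's exact test) and whether weekly training
    # hours differ between athletes with and without pain (t-test).
    # All numbers below are hypothetical, not the survey results.

    # Rows: wears braces / does not; columns: pain / no pain
    contingency = np.array([[40, 35],
                            [185, 142]])
    odds_ratio, p_fisher = stats.fisher_exact(contingency)
    print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

    rng = np.random.default_rng(1)
    hours_pain = rng.normal(6.0, 2.0, 225)      # weekly hours, athletes with pain
    hours_no_pain = rng.normal(5.7, 2.1, 177)   # weekly hours, athletes without pain
    t_stat, p_t = stats.ttest_ind(hours_pain, hours_no_pain, equal_var=False)
    print(f"t-test on training hours: t = {t_stat:.2f}, p = {p_t:.3f}")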

Keywords: handstand, handbalance, wrist pain, hand and wrist surgery, yoga, calisthenics, circus, capoeira, movement

Procedia PDF Downloads 65
503 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities

Authors: Noriyuki Suyama

Abstract:

This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are one of the factors that greatly affect the management and operation of retail stores. However, there is very little academic research on the relationship between weather conditions and marketing, even though the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior. To the best of the author's knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated at 3.4% of GDP, or $485 billion ± $240 billion per year. However, weather data are not yet fully utilized. Representative industries include transportation-related industries (e.g., airlines, shipping, roads, railroads), leisure-related industries (e.g., leisure facilities, event organizers), energy and infrastructure-related industries (e.g., construction, factories, electricity and gas), agriculture-related industries (e.g., agricultural organizations, producers), and retail-related industries (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated, only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail sectors. Second, according to NBL's "Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry," labor productivity in the retail industry is very low compared to other industries, and according to Hirata (2018), improving it is recognized as a major challenge; at the same time, according to the "Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)" by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications can be expected from analyzing these data together with weather data. Third, there is currently a wealth of weather-related information available; companies such as WeatherNews, Inc. make weather information their business and not only disseminate weather information but also provide information that supports businesses in various industries. Despite the wide range of influences that weather has on business, its impact has rarely been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.
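
To make the proposed joint analysis of retail data and weather data concrete, the sketch below merges simulated daily sales with simulated daily weather observations and inspects the correlation between rainfall and sales. The column names, the simulated values and the assumed rain effect are illustrative assumptions, not data or results from this study.

    import pandas as pd
    import numpy as np

    # Minimal sketch: join daily sales of a non-food retailer with daily
    # weather observations and check how rainfall relates to sales.
    # The data are simulated; in practice both tables would come from the
    # retailer's POS system and a weather information provider.

    rng = np.random.default_rng(2)
    dates = pd.date_range("2023-04-01", periods=90, freq="D")
    weather = pd.DataFrame({
        "date": dates,
        "rain_mm": rng.gamma(shape=1.2, scale=4.0, size=len(dates)),
        "temp_c": rng.normal(18, 5, size=len(dates)),
    })
    # Hypothetical effect: heavier rain depresses store traffic and sales
    sales = pd.DataFrame({
        "date": dates,
        "sales_eur": 10_000 - 120 * weather["rain_mm"]
                     + rng.normal(0, 600, size=len(dates)),
    })

    merged = sales.merge(weather, on="date")
    print(merged[["sales_eur", "rain_mm", "temp_c"]].corr().round(2))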

Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing

Procedia PDF Downloads 52
502 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents the design of a microstrip bandpass filter with a compact size and wide stopband using a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupling stepped-impedance resonator with a cross-coupled structure is adopted to design the bandpass filter. A two-resonator filter was fabricated with a 13.5 GHz center frequency, and an 11% bandwidth was achieved. The devices were simulated using the ADS design software. This device has a compact size and a low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in various communication applications due to the attractive features of compact size and ease of fabrication. Various planar resonator structures have been suggested, and, in order to achieve a wide stopband that reduces interference outside the passband, various planar resonator designs have also been proposed to suppress the higher-order harmonics of the designed center frequency. Various modifications to the traditional hairpin structure have been introduced to reduce the large design area of hairpin designs; the stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed that combines the stepped-impedance resonator design with the slow-wave open-loop resonator structure. In this way, a very compact circuit size as well as a very wide upper stopband can be achieved and realized on a Rogers 4003C substrate. On the other hand, filters constructed with integrated circuit technology become more attractive because they enable the integration of the microwave system on a single chip (SoC). To examine the performance of this design structure as an integrated circuit, the filter is fabricated with the 0.15-μm pHEMT GaAs integrated circuit process, which can also provide much better circuit performance for high-frequency designs than implementations on a PCB. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the performance at higher frequency in detail. The occupied area is only about 1.09×0.97 mm². The ADS software is used to design the modified filters to suppress the first and second harmonics.
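
As a back-of-the-envelope companion to the figures quoted above, the sketch below computes the passband edges implied by a 13.5 GHz center frequency with 11% fractional bandwidth, plus the approximate physical length of a plain half-wavelength microstrip resonator. The assumed effective permittivity is illustrative and does not correspond to the fabricated GaAs layout.

    import math

    # Minimal sketch: passband edges for a given fractional bandwidth and the
    # approximate length of a half-wavelength microstrip resonator.
    # eps_eff is an assumed effective permittivity, not the pHEMT process value.

    c = 299_792_458.0          # speed of light, m/s
    f0 = 13.5e9                # center frequency, Hz
    fbw = 0.11                 # fractional bandwidth

    f_low = f0 * (1 - fbw / 2)
    f_high = f0 * (1 + fbw / 2)

    eps_eff = 7.0              # assumed effective permittivity of the line
    guided_wavelength = c / (f0 * math.sqrt(eps_eff))
    half_wave_length_mm = guided_wavelength / 2 * 1e3

    print(f"Passband: {f_low/1e9:.2f} - {f_high/1e9:.2f} GHz")
    print(f"Plain half-wavelength resonator length ~ {half_wave_length_mm:.2f} mm")

The gap between this straight half-wavelength estimate (several millimetres) and the reported 1.09×0.97 mm² footprint is exactly what stepped-impedance and slow-wave techniques are used to close.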

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 308
501 A Brief Review on the Relationship between Pain and Sociology

Authors: Hanieh Sakha, Nader Nader, Haleh Farzin

Abstract:

Introduction: Throughout history, theories of pain have been proposed within biomedicine, especially with regard to diagnosis and treatment. Yet the feeling of pain is not only a personal experience; it is affected by social background and therefore involves extensive systems of signals. The difficulties in addressing the emotional and sentimental dimensions of pain originate in scientific medicine, whose dominant theory is also referred to as the specificity theory; this theory, however, underwent some alterations as physiology emerged. Fifty years after the specificity theory, Von Frey suggested the theory of cutaneous senses (i.e., Muller's concept: the common sensation combining four major skin receptors leading to a distinct sensation). The pain pathway was held to be composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various causes and qualities. Despite the gate control theory, the biological aspect still overshadows the social aspect. Vrancken provided a more extensive definition of pain and identified five approaches: the somatico-technical, dualistic body-oriented, behaviorist, phenomenological, and consciousness approaches. The Western model combines the physical, emotional, and existential aspects of the human body. Kotarba, on the other hand, remained uncertain about the basic origins of chronic pain. Freund demonstrated and argued, in dialogue with the Durkheimian tradition, for a sociological approach to emotions. Lynch provided evidence of a correlation between cardiovascular disease and emotionally life-threatening occurrences. Helman proposed a distinction between private and public pain. Conclusion: Consideration of the emotional aspect of pain could lead to effective emotional and social responses to pain. In parallel, the theory of embodiment is based on the sociological view of health and illness. Social epidemiology shows an unequal distribution of health, illness, and disability among various social groups. Social support and socio-cultural level can shape several types of pain; for instance, the status of athletes might define their pain experiences. Gender is another important contributing factor affecting the experience of pain (e.g., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, and it is aggravated by the lack of awareness about chronic pain management among the general population.

Keywords: pain, sociology, sociological, body

Procedia PDF Downloads 42
500 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence

Authors: Yves Alain Ach, Sandra Rmadi Said

Abstract:

An extensive review of the literature on brands has shown that little research has focused on the nature and determinants of the information disclosed by companies with respect to the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information disclosed by companies relevant to estimating the value of brands and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of information on intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic and strategic information in relation to their brands rather than provide financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm communicate more information to investors in their annual reports. Our study points out that, unlike in previous studies on intangible capital, the sector is not an explanatory variable for voluntary brand disclosure. Our study is distinguished by its focus on an element that has received little attention in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link, although many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between the supply of brand information and performance is insignificant. This relationship was already controversial in previous research, which suggests that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution focuses on the nature of the inherent characteristics of the companies that disclose the most information about their brands. Our results show the absence of a link between size and industry, on the one hand, and the supply of brand information, on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic and strategic. We therefore question the reasons that may lead companies, from one year to the next, to voluntarily communicate mainly accounting, economic and strategic information in relation to their brands and not to communicate detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. They highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a "report on intangible assets".
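
A minimal sketch of how disclosure determinants of this kind are commonly tested, here a regression of a brand-disclosure score on firm characteristics, is shown below. The simulated dataset, variable names and coefficients are illustrative assumptions and do not reproduce the 37-company sample or the paper's method.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Minimal sketch: regress a brand-disclosure score on firm characteristics
    # (size, international operations, auditor category, profitability).
    # The data are simulated; they are not the study's 37-company sample.

    rng = np.random.default_rng(3)
    n = 37
    df = pd.DataFrame({
        "log_size": rng.normal(8, 1.5, n),            # log of total assets
        "international": rng.integers(0, 2, n),       # operates internationally
        "big_auditor": rng.integers(0, 2, n),         # category 1 auditing firm
        "roa": rng.normal(0.05, 0.04, n),             # profitability proxy
    })
    # Hypothetical data-generating process for the disclosure score
    df["disclosure_score"] = (2.0 + 1.5 * df["international"]
                              + 1.2 * df["big_auditor"]
                              + rng.normal(0, 1.0, n))

    X = sm.add_constant(df[["log_size", "international", "big_auditor", "roa"]])
    model = sm.OLS(df["disclosure_score"], X).fit()
    print(model.summary().tables[1])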

Keywords: brand related information, brand value, information disclosure, determinants

Procedia PDF Downloads 55
499 Making Unorganized Social Groups Responsible for Climate Change: Structural Analysis

Authors: Vojtěch Svěrák

Abstract:

Climate change ethics has recently shifted away from individualistic paradigms towards concepts of shared or collective responsibility. Despite this evolving trend, a noticeable gap remains: a lack of research exclusively addressing the moral responsibility of specific unorganized social groups. The primary objective of the article is to fill this gap. The article employs the structuralist methodological approach proposed by some feminist philosophers, utilizing structural analysis to explain the existence of social groups. The argument is made for the integration of this framework with the so-called forward-looking Social Connection Model (SCM) of responsibility, which ascribes responsibilities to individuals based on their participation in social structures, and the article offers an extension of this model to justify the responsibility of unorganized social groups. The major finding of the study is that although members of unorganized groups are loosely connected, collectively they instantiate specific external social structures and share social positioning, and the notion of responsibility can be based on that. Specifically, if the structure produces harm or perpetuates injustices, and the group both benefits from the structure and possesses the capacity to significantly influence it, a greater degree of responsibility should be attributed to the group as a whole. This thesis is applied and justified within the context of climate change, based on the asymmetrical positioning of different social groups. Climate change creates a triple inequality: in contribution, vulnerability, and mitigation. The study posits that different degrees of group responsibility can be drawn from these inequalities. Two social groups serve as case studies for the article: first, the Pakistani lower class, consisting of people living below the national poverty line, with a low greenhouse gas emissions rate, severe climate-change-related vulnerability due to the lack of adaptation measures, and very limited options to participate in the mitigation of climate change; second, the so-called polluter elite, defined by members' investments in polluting companies and high-carbon lifestyles, and thus with an interest in the continuation of the structures leading to climate change. The first group cannot be held responsible for climate change, but its group interest lies in structural change and should be collectively maintained. The responsibility of the second group, on the other hand, is significant and can be fulfilled through a justified demand for certain political changes. The proposed approach to group responsibility is intended to help navigate climate justice discourse and environmental policies, thus helping with the sustainability transition.

Keywords: collective responsibility, climate justice, climate change ethics, group responsibility, social ontology, structural analysis

Procedia PDF Downloads 34
498 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

Thanks to advances in the ICT sector, the manufacturing industry has been collecting vast amounts of data for monitoring product quality, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism uniquely combines artificial intelligence models to effectively detect unusual values, such as outliers and extreme deviations, in the data coming from the sensors. Specifically, autoregressive integrated moving average (ARIMA) models, long short-term memory (LSTM)- and dense-based autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques that ensure only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain and take advantage of the multitude of monitoring records in their databases.
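
As a self-contained illustration of the reconstruction-error idea behind the dense-autoencoder component, the sketch below trains a small autoencoder on simulated sensor readings and flags points whose reconstruction error exceeds a threshold. The architecture, threshold rule and data are assumptions for the example, not the system's actual models or data.

    import numpy as np
    import tensorflow as tf

    # Minimal sketch: dense autoencoder for point-anomaly detection on sensor
    # readings. Architecture, threshold rule and data are illustrative.

    rng = np.random.default_rng(4)
    normal = rng.normal(0.0, 1.0, size=(2000, 8)).astype("float32")   # training data
    test = np.vstack([rng.normal(0.0, 1.0, size=(95, 8)),
                      rng.normal(6.0, 1.0, size=(5, 8))]).astype("float32")

    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(8),                      # reconstruction
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(normal, normal, epochs=20, batch_size=64, verbose=0)

    # Flag readings whose reconstruction error exceeds mean + 3 std of the
    # training error; only readings that pass would then be sent on-chain.
    train_err = np.mean((autoencoder.predict(normal, verbose=0) - normal) ** 2, axis=1)
    threshold = train_err.mean() + 3 * train_err.std()
    test_err = np.mean((autoencoder.predict(test, verbose=0) - test) ** 2, axis=1)
    print("Flagged as anomalous:", np.where(test_err > threshold)[0])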

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 155
497 The Incoherence of the Philosophers as a Defense of Philosophy against Theology

Authors: Edward R. Moad

Abstract:

Al-Ghazali’s Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the ‘death of philosophy’ in the Muslim world. ‘Falsafa’, however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call ‘falsafical’ fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use falsafa standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being ‘philosophical’, is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone’s ‘bad’ way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali’s philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of sense, the ‘estimative imagination’ (wahm), and the pure intellect, along with the respective forms of discourse – rhetoric, dialectic, and demonstration – appropriate to each category of that order. Al-Farabi, in his Book of Letters, describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter: theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like ‘nourishment for the tree precedes its fruit.’ That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, based on Al-Farabi’s framework, Ghazali’s Tahāfut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind against the threat of a kind of developing theology going by the name of falsafa.

Keywords: philosophy, incoherence, theology, Tahafut

Procedia PDF Downloads 136