Search results for: microbial contribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3242

212 Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms

Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck

Abstract:

A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Research on their performance remains incomplete and fragmented, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. The results confirmed the RBV argument that intangible resources make a larger contribution to overall performance variations among firms than tangible resources do. Firms' tangible and intangible resources together explained 34.5% of overall performance variations (intangible resources accounted for 19.4% of the variability compared to 15.1% for tangible resources), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, explaining 29.0% of the variation in overall performance. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last, explaining only 5.1% of the overall performance variability among firms. The study also found that firms that focus on proper utilization of tangible resources (financial and physical), set targets, and undertake better working capital management practices performed better than their counterparts (low and average performers). Furthermore, accumulation and proper utilization of intangible resources (relational, organizational, and reputational), performance monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors influencing the variations between average and high performers, relative to low performers. Entrepreneurial background was a significant factor influencing variations between average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, and the gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Results from this study will therefore help managers, policymakers, and researchers to identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.
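As an illustration of the variance-decomposition logic described above, the sketch below fits hierarchical OLS regressions and compares the R² contributed by blocks of firm-level factors. It is a minimal sketch on simulated data with hypothetical variable names, not the authors' actual model specification.

```python
# Illustrative sketch (not the authors' models): hierarchical OLS regressions that
# attribute explained variance (R^2) to blocks of firm-level factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 442  # sample size reported in the abstract
df = pd.DataFrame({
    "tangible_resources": rng.normal(size=n),     # hypothetical composite indices
    "intangible_resources": rng.normal(size=n),
    "firm_age": rng.integers(1, 30, size=n),
    "manager_age": rng.integers(20, 65, size=n),
})
df["performance"] = (0.4 * df["intangible_resources"]
                     + 0.3 * df["tangible_resources"]
                     + rng.normal(scale=1.0, size=n))

def r_squared(feature_block):
    """Fit OLS of performance on the given block of factors and return R^2."""
    X = sm.add_constant(df[feature_block])
    return sm.OLS(df["performance"], X).fit().rsquared

print("tangible only:  ", round(r_squared(["tangible_resources"]), 3))
print("intangible only:", round(r_squared(["intangible_resources"]), 3))
print("both resources: ", round(r_squared(["tangible_resources", "intangible_resources"]), 3))
```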

Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based view

Procedia PDF Downloads 62
211 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos

Abstract:

One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least six orders of magnitude slower than the hydrogen oxidation on the anode; therefore, high Pt loading is required. Catalyst corrosion is also more significant on the cathode, especially in the case of mobile applications, where rapid changes of load have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, we present our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, according to the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and better stability in comparison to the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170 °C resulted in complete reduction of the ionic tin to Sn0. No evidence of the presence of a SnO2 phase was found by means of XRD and EDS analysis. These results demonstrate that the method of CSRs is a powerful tool to create Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. On the contrary, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and better long-term stability in the ORR than Pt/C, which was assigned to the presence of SnO2 in close proximity to, or contact with, the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed an outstanding performance in the ORR. The preparation of a highly dispersed 20 wt.% Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.

Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst

Procedia PDF Downloads 207
210 Evaluation of Mixing and Oxygen Transfer Performances for a Stirred Bioreactor Containing P. chrysogenum Broths

Authors: A. C. Blaga, A. Cârlescu, M. Turnea, A. I. Galaction, D. Caşcaval

Abstract:

The performance of an aerobic stirred bioreactor for fungal fermentation was analyzed on the basis of mixing time and the oxygen mass transfer coefficient, by quantifying the influence of specific geometrical and operational parameters of the bioreactor, as well as the rheological behavior of Penicillium chrysogenum broth (free mycelia and mycelial aggregates). The rheological properties of the fungal broth, controlled by the biomass concentration, its growth rate, and its morphology, strongly affect the performance of the bioreactor. Experimental data showed that for both morphological structures the accumulation of fungal biomass induces a significant increase in broth viscosity and modifies the rheological behavior. For lower P. chrysogenum concentrations (both morphological conformations), the mixing time initially increases with aeration rate, reaches a maximum value, and then decreases. This variation can be explained by the formation of small bubbles, due to the presence of the solid phase which hinders bubble coalescence, the rising velocity of the bubbles being reduced by the high apparent viscosity of the fungal broth. With biomass accumulation, the variation of mixing time with aeration rate gradually changes, a continuous reduction of mixing time with increasing air input flow being obtained for 33.5 g/l d.w. P. chrysogenum. Owing to the higher apparent viscosity, which considerably reduces the relative contribution of mechanical agitation to broth mixing, these phenomena are more pronounced for P. chrysogenum free mycelia. Due to the increase in broth apparent viscosity, biomass accumulation induces two significant effects on the oxygen transfer rate: the diminution of turbulence and the perturbation of the bubble dispersion-coalescence equilibrium. The increase of P. chrysogenum free mycelia concentration leads to a decrease of kLa values. Thus, for the considered variation domain of the main parameters, namely air superficial velocity from 8.36×10⁻⁴ to 5.02×10⁻³ m/s and specific power input from 100 to 500 W/m³, kLa was reduced 3.7-fold as the biomass concentration increased from 4 to 36.5 g/l d.w. The broth containing P. chrysogenum mycelial aggregates exhibits a particular behavior from the point of view of oxygen transfer. Regardless of the bioreactor operating conditions, the increase of biomass concentration initially leads to an increase of the oxygen mass transfer rate, a phenomenon that can be explained by the interaction of pellets with bubbles. The results are related to the increase of the apparent viscosity of the broths corresponding to the variation of biomass concentration between the mentioned limits. Thus, the apparent viscosity increased 44.2-fold for the suspension of fungal mycelial aggregates and 63.9-fold for fungal free mycelia as CX increased from 4 to 36.5 g/l d.w. By means of the experimental data, mathematical correlations describing the influences of the considered factors on mixing time and kLa have been proposed. The proposed correlations can be used in bioreactor performance evaluation, optimization, and scale-up.
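The correlations themselves are not reproduced in the abstract; a commonly used power-law form for kLa in stirred, aerated bioreactors, given here only as an assumed illustration of what such a correlation typically looks like, is:

```latex
% Generic power-law correlation for the volumetric oxygen mass transfer coefficient
% (illustrative form only; C, a, b, c are fitted to experimental data and are not the
% authors' reported values):
k_L a = C \left(\frac{P}{V}\right)^{a} v_s^{\,b} \, \mu_{app}^{\,c}
```

where P/V is the specific power input (W/m³), v_s the superficial air velocity (m/s), and μ_app the apparent viscosity of the broth.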

Keywords: biomass concentration, mixing time, oxygen mass transfer, P. chrysogenum broth, stirred bioreactor

Procedia PDF Downloads 308
209 In Response to Worldwide Disaster: Academic Libraries’ Functioning During COVID-19 Pandemic Without a Policy

Authors: Dalal Albudaiwi, Mike Allen, Talal Alhaji, Shahnaz Khadimehzadah

Abstract:

As a pandemic, COVID-19 has impacted the whole world since late 2019. In other words, every organization, industry, and institution has been negatively affected by the coronavirus. The uncertainty about how long the pandemic would last caused chaos at all levels. Like any other institution, public libraries were affected and transitioned to online services and resources. Internationally, it has been observed that some public libraries were well prepared for disasters such as the pandemic, and collections, users, services, technologies, staff, and budgets were all influenced. Public libraries' policies did not mention any plan for such a pandemic; instead, their guidelines contain several rules about disasters in general, such as natural disasters. During the pandemic, libraries have found themselves in difficult circumstances. However, public libraries have always been clear about the role they play in serving their communities in both good and critical times. This dwells on the traditional role public libraries play in providing information services and sources to satisfy their communities' information needs, remarkably increasing people's awareness of the importance of informational enrichment and enhancing society's skills in dealing with information and information sources. Under critical circumstances, libraries play a different role, one that goes beyond the traditional part of information provider to the untraditional role of a social institution that serves the community with whatever capabilities it has. This study takes two major directions. The first focuses on investigating how libraries have responded to COVID-19 and how they manage disasters within their organizations. The second focuses on how libraries help their communities act during disasters and recover from the consequences. The current study examines how libraries prepare for disasters and the role of public libraries during disasters. We will also propose 'measures' as a model that libraries can use to evaluate the effectiveness of their response to disasters. We intend to focus on how libraries responded to this new disaster. Therefore, this study aims to develop a comprehensive policy that includes responding to a crisis such as COVID-19. An analytical lens inside the libraries as organizations and outside the organizational walls will be documented, based on analyzing disaster-related literature published in LIS publications. The study employs content analysis (CA), a methodology widely used in library and information science. The key contribution of this work lies in the solutions it proposes to libraries and planners for preparing crisis management plans and policies, specifically to face a new global disaster such as the COVID-19 pandemic. Moreover, the study will help library directors to evaluate their strategies and improve them appropriately. The significance of this study lies in guiding library directors to enhance the goals of their libraries on crucial issues such as saving time, avoiding losses, saving budget, acting quickly during a crisis, maintaining the library's role during pandemics, finding the best response to disasters, and creating a plan/policy as a sample for all libraries.

Keywords: COVID-19, policy, preparedness, public libraries

Procedia PDF Downloads 57
208 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers

Authors: Navah Z. Ratzon, Rachel Shichrur

Abstract:

Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and on the guiding occupational therapy practice framework model in Israel, adapting the model to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive, and on-road driving data), with reference to the difficulties each driver raised, and on providing coping strategies. The intervention was customized for each driver and included three two-hour sessions. The effectiveness of the intervention was tested using objective measures, namely In-Vehicle Data Recorders (IVDR) for monitoring naturalistic driving data and traffic accident data before and after the intervention, and a subjective measure (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the degree of change in the rate of IVDR perilous events before and after the intervention (t(17) = 2.14, p = 0.046). There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17) = 2.11, p = 0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the 'human factors/person' field was significant (performance: t = -2.30, p = 0.04; satisfaction with performance: t = -3.18, p = 0.009). The change in the 'driving occupation/tasks' field was not significant but showed a tendency toward significance (t = -1.94, p = 0.07). No significant differences were found in driving environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers' driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also provides familiarity with advanced IVDR technologies and enriches occupational therapists' knowledge regarding the use of a wide variety of driving assessment tools and best-practice decision-making.
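A minimal sketch of the kind of before/after comparison reported above (a paired t-test on per-driver event counts) is shown below; the data are hypothetical, and the degrees of freedom simply mirror the t(17) reported in the abstract.

```python
# Minimal sketch of a before/after comparison of per-driver perilous-event rates
# using a paired t-test; the numbers below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs = 18  # t(17) in the abstract implies 18 paired observations
before = rng.poisson(lam=8.0, size=n_pairs).astype(float)              # events per driver
after = before - rng.normal(loc=1.5, scale=1.0, size=n_pairs)          # fewer events post-intervention

t_stat, p_value = stats.ttest_rel(before, after)
print(f"paired t({n_pairs - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```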

Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention

Procedia PDF Downloads 324
207 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely: (i) the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, no analytical solution is known to date; (ii) in geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are used for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available.
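The offline/online split described above can be illustrated, under simplifying assumptions, by reusing a single factorization of a position-independent operator for many tool positions. The sketch below uses a generic sparse 1D operator and point sources as stand-ins; it is not the authors' multi-scale finite element formulation.

```python
# Conceptual sketch of an offline/online split: factorize the position-independent
# operator once (offline) and reuse the factorization to solve for every logging
# position (online). Generic 1D operator and point sources; not the authors' method.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Position-independent discretized operator (stand-in for one Hankel-mode problem).
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Offline: factorize once, independently of the tool position.
lu = spla.splu(A)

# Online: one cheap solve per logging position (only the source term changes).
logging_positions = np.linspace(0.1, 0.9, 50)
solutions = []
for pos in logging_positions:
    b = np.zeros(n)
    b[int(pos * (n - 1))] = 1.0  # point source located at the tool position
    solutions.append(lu.solve(b))

print(len(solutions), "online solves reused one offline factorization")
```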

Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 363
206 (De)Motivating Mitigation Behavior: An Exploratory Framing Study Applied to Sustainable Food Consumption

Authors: Youval Aberman, Jason E. Plaks

Abstract:

This research provides initial evidence that the self-efficacy of mitigation behavior, the belief that one's actions can make a difference for the environment, can be implicitly inferred from the way numerical information is presented in environmental messages. The scientific community sees climate change as a pressing issue, but the general public tends to construe climate change as an abstract phenomenon that is psychologically distant. As such, a main barrier to pro-environmental behavior is that individuals often believe that their own behavior makes little to no difference to the environment. When it comes to communicating how the behavior of billions of individuals affects global climate change, it might appear valuable to aggregate across those billions and present the shocking enormity of the resources individuals consume. This research provides initial evidence that, in fact, this strategy is ineffective: presenting large-scale aggregate data dilutes the contribution of the individual and impedes individuals' motivation to act pro-environmentally. The high-impact, underrepresented behavior of eating a sustainable diet was chosen for the present studies. US participants (total N = 668) were recruited online for a study on 'meat and the environment' and received information about some of the resources used in meat production (water, CO2e, and feed), with numerical information that varied in its frame of reference. A 'Nation' frame of reference discussed the resources used by the beef industry, such as the billions of CO2e released daily by the industry, while a 'Meal' frame of reference presented the resources used in the production of a single beef dish. Participants completed measures of pro-environmental attitudes and behavioral intentions, either immediately (Study 1) or two days after reading the information (Study 2). In Study 2 (n = 520), participants also indicated whether they consumed less or more meat than usual. Study 2 included an additional control condition that contained no environmental data. In Study 1, participants who read about meat production at a national level, compared to a meal level, reported lower motivation to make ecologically conscious dietary choices and lower behavioral intention to change their diet. In Study 2, a similar pattern emerged, with the added insight that the Nation condition, but not the Meal condition, deviated from the control condition. Participants across conditions, on average, reduced their meat consumption over the duration of Study 2, except those in the Nation condition, whose consumption remained unchanged. Presenting nation-wide consequences of human behavior is a double-edged sword: framing at a large scale might reveal the relationship between collective actions and environmental issues, but it hinders the belief that individual actions make a difference.

Keywords: climate change communication, environmental concern, meat consumption, motivation

Procedia PDF Downloads 142
205 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion: How Neurofinance Can Help to Understand Behavioral Anomalies

Authors: Roberta Martino, Viviana Ventre

Abstract:

Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by a discount function that reduces the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Empirical evidence, however, called for the formulation of alternative, hyperbolic models that better represent the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, reflecting his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and the amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
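For reference, the two standard discount functions contrasted above can be written as follows; the specific two-component decomposition of the hyperbolic curve into an impatience term and an uncertainty-aversion term is the authors' contribution and is not reproduced here.

```latex
% Standard discount functions (k > 0 is the individual's discount rate):
D_{\mathrm{exp}}(t) = e^{-kt}            % exponential: constant rate of decline (classical rationality)
D_{\mathrm{hyp}}(t) = \frac{1}{1 + kt}   % hyperbolic: rate of decline falls with delay (observed behavior)
```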

Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty

Procedia PDF Downloads 101
204 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is on mobile and fixed broadband services. This study has dual importance, both academic and practical. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers' service experience in a new area, rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth, and peace of mind. In addition, it gives a scientific explanation for this relationship, filling a gap in the literature, since no previous work has correlated or explained these relations using such an integrated model; this is the first time this modified, integrated model has been applied in the telecom field. Practically, this research gives insights to marketers and practitioners on improving customer loyalty by evolving the experience quality of broadband customers, which translates into suggested outcomes: purchase, commitment, repeat purchase, and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the Net Promoter Score and gave an explanation for the results. Furthermore, customers' priorities among broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and that they be applied in the same industry, especially in developing countries that face the same circumstances with similar service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, making marketing more reliable, as managers can relate investments in service experience directly to the performance outcomes closest to income, for instance repurchase behavior, positive word of mouth, and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
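The Net Promoter Score mentioned above is computed, in its standard form, as the percentage of promoters minus the percentage of detractors on a 0-10 likelihood-to-recommend scale; a minimal sketch with hypothetical ratings is shown below.

```python
# Minimal sketch of the standard Net Promoter Score calculation: % promoters (9-10)
# minus % detractors (0-6) on a 0-10 recommendation scale. The ratings are hypothetical.
ratings = [10, 9, 8, 7, 9, 3, 6, 10, 2, 9, 8, 5]

promoters = sum(1 for r in ratings if r >= 9)
detractors = sum(1 for r in ratings if r <= 6)
nps = 100.0 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:.1f}")
```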

Keywords: broadband services, customer experience quality, loyalty, net promoter score

Procedia PDF Downloads 246
203 How to Assess the Attractiveness of Business Location According to the Mainstream Concepts of Comparative Advantages

Authors: Philippe Gugler

Abstract:

Goal of the study: The concept of competitiveness has been addressed by economic theorists and policymakers for several hundred years, with both groups trying to understand the drivers of economic prosperity and social welfare. The goal of this contribution is to address the major theoretical contributions that make it possible to identify the main drivers of a territory's competitiveness. We first present the major contributions found in the classical and neo-classical theories. Then, we concentrate on two major schools providing significant thoughts on the competitiveness of locations: the Economic Geography (EG) school and the International Business (IB) school. Methodology: The study is based on a literature review of the classical and neo-classical theories, the economic geography theories, and the international business theories. This literature review establishes links between these theoretical mainstreams. The work is based on an academic framework establishing a meaningful literature review aimed at responding to our research question and at developing further research in this field. Results: The classical and neo-classical pioneering theories provide initial insights that territories are different and that these differences explain the discrepancies in their levels of prosperity and standards of living. These theories emphasized different factors impacting the level and growth of productivity in a given area and therefore its degree of competitiveness. However, these theories are not sufficient to identify more precisely the drivers and enablers of location competitiveness and to explain, in particular, the factors that drive the creation of economic activities, the expansion of economic activities, the creation of new firms, and the attraction of foreign firms. Prosperity is due to economic activities created by firms. Therefore, we need further theoretical insights to scrutinize the competitive advantages of territories or, in other words, their ability to offer the best conditions that enable economic agents to achieve higher rates of productivity in open markets. Two major bodies of theory provide, to a large extent, the needed insights: economic geography theory and international business theory. The economic geography studies scrutinized in this study, from Marshall to Porter, aim to explain the drivers of the concentration of specific industries and activities in specific locations. These agglomerations of activity may be due to the creation of new enterprises, the expansion of existing firms, and the attraction of firms located elsewhere. Regarding this last possibility, the international business (IB) theories focus on the comparative advantages of locations as far as multinational enterprises' (MNEs) strategies are concerned. According to international business theory, the comparative advantages of a location serve firms not only by exploiting their ownership advantages (mostly as far as market-seeking, resource-seeking, and efficiency-seeking investments are concerned) but also by augmenting and/or creating new ownership advantages (strategic asset-seeking investments). The impact of a location on the competitiveness of firms is considered from both sides: the MNE's home country and the MNE's host country.

Keywords: competitiveness, economic geography, international business, attractiveness of businesses

Procedia PDF Downloads 117
202 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education

Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg

Abstract:

In Norway, every municipality must provide an educational-psychological service (EPS) to support kindergartens and schools in their work with children and youths with special needs. The EPS focuses its work on individuals, aiming to identify special needs and to give advice to teachers and parents when they ask for it. In addition, the service gives priority to prevention and system-level intervention in kindergartens and schools. To master these demanding tasks, university courses have been established to support EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge about how counsellors experience the courses they attend. In this study, EPS counsellors' reflection logs written during a particular course are investigated. The research question is: what are the content and priorities of the reflections communicated in the logs produced by the educational-psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The 55 students enrolled in the course work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the data material to ascertain, differentiate, and understand the meaning of the content in each log. The analysis also involved the search for similarities in content and the development of analytical categories that described the focus and primary concerns of each written log. This involved constant 'critical and sustained discussions' for the mutual construction of meaning between the co-researchers as the categories developed. The process was inspired by Grounded Theory, meaning that the concepts developed during the analysis were derived from the data material rather than chosen prior to the investigation. The analysis revealed that the concept 'useful' appeared frequently in the participants' reflections and, as such, serves as the core category. The core category is described through three major categories: (1) knowledge sharing with colleagues (concerning direct and indirect work with students with special needs) is useful, (2) reflection on models and theoretical concepts (concerning students with special needs) is useful, and (3) reflection on the role of the EPS counsellor is useful. In all the categories, the notion of usefulness appears in the participants' emphasis on, and acknowledgement of, the immediate and direct link between the university course content and their daily work practice. Even though each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven; it is this connectedness that gives the core category its overarching explanatory power. The knowledge from this study may be a relevant contribution to designing new courses that support continuing professional development for EPS counsellors, whether as post-graduate university courses or local courses at EPS offices, and whether in Norway or in other countries.

Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development

Procedia PDF Downloads 91
201 For Whom Is Legal Aid: A Critical Analysis of the State-Funded Legal Aid in Criminal Cases in Tajikistan

Authors: Umeda Junaydova

Abstract:

Legal aid is a key element of access to justice. According to the UN Principles and Guidelines on Access to Legal Aid in Criminal Justice Systems, member states bear the obligation to put in place accessible, effective, sustainable, and credible legal aid systems. Regarding this obligation, developing countries such as Tajikistan have faced challenges in financing such systems; thus, many developed nations have launched rule-of-law programs to support these states and ensure access to justice for all. Following independence from the Soviet Union, Tajikistan committed to introducing the rule of law and providing access to justice. The newly established country was weak, and the sudden outbreak of civil war aggravated the situation even more. The country needed external support and opened its doors to foreign donors to assist it on its way to development. In 2015, Tajikistan, with the financial support of development partners, was able to establish a state-funded legal aid system that provides legal assistance to vulnerable and marginalized populations, including in criminal cases. In the beginning, almost the whole system was financed from donor funds; since then, the government's contribution has gradually increased, and it currently covers 80% of the total budget. All these government actions toward ensuring access to criminal legal aid for disadvantaged groups look promising; however, the reality is quite different. Currently, not all disadvantaged people are covered by these services, and their cases are often considered without appropriate defense, which leads to violations of fundamental human rights. This research presents a comprehensive exploration of the interplay between donor assistance and the effectiveness of legal aid services in Tajikistan, with a specific focus on criminal cases involving vulnerable groups, such as women and children. In the context of Tajikistan, this study addresses a pressing concern: despite substantial financial support from international donors, state-funded legal aid services often fall short of meeting the needs of poor and vulnerable populations. The study delves into the underlying complexities of this issue and examines the structural, operational, and systemic challenges faced by legal aid providers, shedding light on the factors contributing to the ineffectiveness of legal aid services. Furthermore, it seeks to identify the root causes of these issues, revealing the barriers that hinder the delivery of adequate legal aid services. The research adopts a socio-legal methodology, combining multiple methods as appropriate. The findings of this research hold significant implications for both policymakers and practitioners, offering insights into the enhancement of legal aid services and access to justice for disadvantaged and marginalized populations in Tajikistan. By addressing these pressing questions, this study aims to fill a gap in the legal literature and contribute to the development of a more equitable and efficient legal aid system that better serves the needs of the most vulnerable members of society.

Keywords: access to justice, legal aid, rule of law, right to counsel

Procedia PDF Downloads 25
200 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gas Emissions

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often heavy emitters of greenhouse gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is critically important to discuss such reductions in an intra-national framework, with the objective of distributional equity, in order to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some initial considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as they have behaved in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need to keep increasing its GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when each micro-region will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they become developed and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less demanding reductions; for example, Vale do Ipanema will emit, in 2050, only 10% less than the value observed in 2010. This methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Linking the magnitude of the reductions to the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlights the importance of considering heterogeneity when determining individual mitigation targets and also confirms the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the reduction target adopted.
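A rough sketch of the general workflow described above (extrapolating each micro-region's HDI and emissions, flagging the year development is reached, and scaling the required 2050 reduction with the development level) is shown below. The data, growth rates, thresholds, and functional forms are all hypothetical and do not reproduce the study's results.

```python
# Illustrative sketch of the workflow described in the abstract. All numbers and
# functional forms here are hypothetical, not the study's data or methodology.
import numpy as np

regions = {
    # name: (HDI in 2010, annual HDI gain, CO2e in 2010 [Mt], annual emissions growth)
    "Sao Paulo":       (0.80, 0.004, 60.0, 0.010),
    "Vale do Ipanema": (0.55, 0.006,  2.0, 0.020),
}
DEVELOPED_HDI = 0.80  # hypothetical threshold at which a region counts as developed

for name, (hdi0, hdi_g, e0, e_g) in regions.items():
    years = np.arange(2011, 2051)
    hdi = np.minimum(hdi0 + hdi_g * (years - 2010), 1.0)   # linear HDI extrapolation
    emissions = e0 * (1.0 + e_g) ** (years - 2010)         # exponential emissions trend
    developed_year = years[hdi >= DEVELOPED_HDI][0] if (hdi >= DEVELOPED_HDI).any() else None
    # Reduction target grows with development: more developed regions cut more.
    reduction_2050 = 0.9 * (hdi[-1] - 0.5) / 0.5 if hdi[-1] > 0.5 else 0.0
    print(f"{name}: developed by {developed_year}, cumulative CO2e {emissions.sum():.0f} Mt, "
          f"required cut in 2050 vs 2010: {100 * reduction_2050:.0f}%")
```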

Keywords: greenhouse gases, human development, mitigation, energy-intensive activities

Procedia PDF Downloads 297
199 Spatio-Temporal Variation of Gaseous Pollutants and the Contribution of Particulate Matter in the Chao Phraya River Basin, Thailand

Authors: Samart Porncharoen, Nisa Pakvilai

Abstract:

Elevated levels of air pollutants in regional atmospheric environments are a significant problem affecting human health in Thailand, particularly in the Chao Phraya River Basin. Of concern are issues surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the river. A spatio-temporal study of air pollution in this real environment can therefore yield more accurate air quality data for formulating environmental policy in river basins. To inform such a policy, a study was conducted over the period January-December 2015 to continually collect measurements of various pollutants in both urban and rural locations in the Chao Phraya River Basin. This study investigated air pollutants in diverse environments along the Chao Phraya River Basin, Thailand, in 2015. Multivariate analysis techniques, such as Principal Component Analysis (PCA) and path analysis, were utilised to classify air pollution in the surveyed locations. Measurements were collected in both urban and rural areas to determine whether significant differences existed between the two locations in terms of air pollution levels. Pollutant and meteorological parameters were collected continually from Thai Pollution Control Department monitoring stations over the period January-December 2015. Of interest to this study were the readings of SO2, CO, NOx, O3, and PM10. Results showed daily arithmetic mean concentrations of SO2, CO, NOx, O3, and PM10 of 3±1 ppb, 0.5±0.5 ppm, 30±21 ppb, 19±16 ppb, and 40±20 µg/m³, respectively, in the urban location (Bangkok). During the same time period, the readings for the same measurements in the rural area (Ayutthaya) were 1±0.5 ppb, 0.1±0.05 ppm, 25±17 ppb, 30±21 ppb, and 35±10 µg/m³, respectively. This shows that the Bangkok sites were located in highly polluted environments dominated by sources emitted from vehicles. Further, the results were analysed to ascertain whether significant seasonal variation existed in the measurements. It was found that levels of both gaseous pollutants and particulate matter were higher in the dry season than in the wet season. More broadly, the results show that the highest pollutant levels were measured in locations along the Chao Phraya River Basin known to have large numbers of vehicles and extensive biomass burning. This correlation suggests that the principal pollutants came from these anthropogenic sources. This study contributes to the body of knowledge surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the Chao Phraya River Basin. Further, this study is one of the first to utilise continuous mobile monitoring along a river in order to obtain accurate measurements during the data collection period. Overall, the results of this study can be used for formulating environmental policy in river basins in order to reduce the effects on human health.
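A minimal sketch of the PCA step described above, applied to synthetic daily pollutant measurements rather than the study's data, is shown below; it standardizes the five pollutant series and reports how much variance each principal component explains.

```python
# Minimal sketch of applying PCA to daily pollutant measurements: standardize the
# series and inspect explained variance and loadings. The data below are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_days = 365
traffic = rng.gamma(shape=2.0, scale=1.0, size=n_days)  # latent traffic/burning driver
X = np.column_stack([
    3 + 1.0 * traffic + rng.normal(scale=0.5, size=n_days),   # SO2 (ppb)
    0.5 + 0.3 * traffic + rng.normal(scale=0.1, size=n_days),  # CO (ppm)
    30 + 10 * traffic + rng.normal(scale=5, size=n_days),      # NOx (ppb)
    19 - 5 * traffic + rng.normal(scale=4, size=n_days),       # O3 (ppb)
    40 + 8 * traffic + rng.normal(scale=6, size=n_days),       # PM10 (ug/m3)
])

pca = PCA(n_components=3)
pca.fit(StandardScaler().fit_transform(X))
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 loadings (SO2, CO, NOx, O3, PM10):", np.round(pca.components_[0], 2))
```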

Keywords: air pollution, Chao Phraya river basin, meteorology, seasonal variation, principal component analysis

Procedia PDF Downloads 254
198 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the overall world economy, SMEs represent 95% of all businesses in the world and account for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators are primarily obtained via questionnaires, which is laborious and time-consuming, or are provided by financial institutions and are therefore highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth in sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company are used as ground truth (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
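The classifier comparison described above can be sketched as follows; the feature matrix is synthetic and merely stands in for web-mined SME features with growth labels, and the models and cross-validation setup are illustrative rather than the study's exact configuration.

```python
# Sketch of comparing SVM, Random Forest, and a feed-forward neural network on
# growth-labeled features. Synthetic data stands in for web-mined SME features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)  # placeholder for web-mined features / growth labels

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.3f}")
```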

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 243
197 The Burmese Exodus of 1942: Towards Evolving Policy Protocols for a Refugee Archive

Authors: Vinod Balakrishnan, Chrisalice Ela Joseph

Abstract:

The Burmese Exodus of 1942, which left more than four lakh (400,000) people as refugees and thousands dead, is one of the worst forced migrations in recorded history. Adding to the woes of the refugees are the lack of credible documentation of their lived experiences, trauma, and stories, and their erasure from recorded history. Media reports, national records, and mainstream narratives that have registered the exodus provide sanitized versions which have reduced the refugees to a nameless, faceless mass of travelers and obliterated their lived experiences, trauma, and suffering. This attitudinal problem compels the need to stem the insensitivity that accompanies institutional memory by making a case for a more humanistically evolved policy that puts in place protocols for the way the humanities would voice concern for the refugee. A definite step in this direction, and a far more relevant project in our times, is the need to build a comprehensive refugee archive that can be a repository of refugee experiences and perspectives. The paper draws on Hannah Arendt's position on the Jewish refugee crisis, Agamben's work on statelessness and citizenship, Foucault's notions of governmentality and biopolitics, Edward Said's concepts of exile, Fanon's work on the dispossessed, and Derrida's work on 'the foreigner and hospitality' in order to conceptualize the refugee condition, which forms the theoretical framework for the paper. It also refers to the existing scholarship in the field of refugee studies, such as Roger Zetter's work on the 'refugee label', Philip Marfleet's work on 'refugees and history', and Lisa Malkki's research on the anthropological discourse of the refugee and refugee studies. The paper is also informed by the work done by international organizations to address the refugee crisis. The emphasis is on building a strong argument for the establishment of a refugee archive, which finds but a passing and none too convincing reference in refugee studies, in order to enable a multi-dimensional understanding of the refugee crisis. Some of the old questions cannot be dismissed as outdated, as the continuing travails of refugees in different parts of the world remind us that they remain, largely, unanswered. The questions are: What is the nature of a refugee archive? How is it different from the existing historical and political archives? What are the implications of the refugee archive? What is its contribution to refugee studies? The paper draws on Diana Taylor's concept of the archive and the repertoire to theorize the refugee archive as a repository that has the documentary function of the 'archive' and the 'agency' function of the repertoire. It then reads Ayya's Accounts, a memoir by Anand Pandian, in the light of Hannah Arendt's concepts of the 'refugee as vanguard' and 'storytelling as political action', to illustrate how the memoir contributes to a refugee archive that gives the refugee a place and agency in history. The paper argues for a refugee archive that has implications for the formulation of inclusive refugee policies.

Keywords: Ayya’s Accounts, Burmese Exodus, policy protocol, refugee archive

Procedia PDF Downloads 114
196 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems that aims to determine the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, which has been commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, Named-Entity Recognition (NER), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: an SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformer. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that the usage of deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over the SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than the SVM, but it still performs worse than the monolingual BERT model.
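A minimal sketch of the traditional baseline described above (a bag of word n-grams fed into a linear SVM) is shown below; the Turkish comments and labels are placeholders rather than the company corpus used in the study, and the fine-tuning of BERT or MUSE is not shown.

```python
# Minimal sketch of the traditional baseline: bag of word n-grams + linear SVM for
# three-class opinion mining. The example comments and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = [
    "ürün çok iyi, tavsiye ederim",         # positive
    "kargo geç geldi ama ürün fena değil",  # neutral
    "berbat bir deneyim, bir daha asla",    # negative
    "fiyatına göre gayet başarılı",         # positive
]
labels = ["positive", "neutral", "negative", "positive"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams, no stemming/morphology
    LinearSVC(),
)
model.fit(comments, labels)
print(model.predict(["hizmet gerçekten kötüydü"]))  # expected: negative
```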

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 115
195 Graphene-Graphene Oxide Dopping Effect on the Mechanical Properties of Polyamide Composites

Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu

Abstract:

Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or come from the easy doping of these materials with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications does not come only from the use of graphene or graphene oxide alone, or from their prior functionalization with different moieties; they are also building blocks and important components in many composite devices, their addition bringing new functionalities to the final composite or strengthening those already present in the parent product. In this work, an attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties comparable to the starting material or even worse, as graphene oxide agglomerates become tearing points in the final material when the amount added is too high (greater than 3% by mass relative to the parent material). Two different types of tests were performed on the obtained materials, the standard hardness test and the standard tensile strength test, both before and after the aging process. For the aging process, accelerated aging was used in order to simulate the effect of natural aging over a long period of time; the accelerated aging was performed in extreme heat. FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced into the final composite along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength test and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.

Keywords: graphene, graphene oxide, mechanical properties, dopping effect

Procedia PDF Downloads 288
194 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest of the work: the debate raises questions that are at the heart of theories on language. It is an inventive, original and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any text of a folk song in a world-tone language, one can piece together not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. As an experimental confirmation of the theorization, the author designed a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect the data extracted from the author's mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 55
193 Biostratigraphic Significance of Shaanxilithes ningqiangensis from the Tal Group (Cambrian), Nigalidhar Syncline, Lesser Himalaya, India and Its GC-MS Analysis

Authors: C. A. Sharma, Birendra P. Singh

Abstract:

We recovered 40 well-preserved, ribbon-shaped, meandering specimens of S. ningqiangensis from the Earthy Dolomite Member (Krol Group) and the calcareous siltstone beds of the Earthy Siltstone Member (Tal Group), showing closely spaced annulations that lack branching. The beginning and terminal points are indistinguishable. In certain cases, individual specimens are characterized by irregular, low-angle to high-angle sinuosity. The fossil has been variously described as a body fossil, an ichnofossil, and an alga. Detailed study of this enigmatic fossil is needed to resolve the long-standing controversy regarding its phylogenetic and stratigraphic placement, which will be an important contribution to the evolutionary history of metazoans. S. ningqiangensis has been known from the late Neoproterozoic (Ediacaran) of southern and central China (Sichuan, Shaanxi, Qinghai and Guizhou provinces and the Ningxia Hui Autonomous Region), the Siberian Platform, and, across the Pc/C boundary, from the latest Neoproterozoic to earliest Cambrian of northern India. Shaanxilithes is considered an Ediacaran organism that spans the Precambrian–Cambrian boundary, an interval marked by significant taphonomic and ecological transformations that include not only innovation but also probable extinction. All the past well-constrained finds of S. ningqiangensis are restricted to the Ediacaran. However, owing to the new recoveries of the fossil from the Nigalidhar Syncline, the stratigraphic status of the S. ningqiangensis-bearing Earthy Siltstone Member of the Shaliyan Formation of the Tal Group (Cambrian) is rendered uncertain, though the overlying Chert Member in the adjoining Korgai Syncline has yielded definite early Cambrian acritarchs. The moot question is whether the Earthy Siltstone Member represents an Ediacaran or an early Cambrian age. It would be interesting to find out whether Shaanxilithes, so far known only from Ediacaran sequences, could transgress into the early Cambrian or, in simple words, could withstand the Pc/C boundary event. GC-MS data show that the S. ningqiangensis structure is formed of hydrocarbon organic compounds filled with inorganic fillers such as silica, calcium, phosphorus, etc. The S. ningqiangensis structure is a mixture of organic compounds of high molecular weight, containing several saturated rings with hydrocarbon chains having an occasional isolated carbon-carbon double bond and containing, in addition, small amounts of nitrogen, sulfur and oxygen. The data also revealed the presence of nitrogen, which would be either in the form of peptide chains (amide/amine) or in chemical form (i.e., nitrates/nitrites, etc.). The formula weight and the C/H weight ratio are what would be expected for algae-derived organics, since algae produce fatty acids as well as other hydrocarbons such as carotenoids.

Keywords: GC-MS analysis, Lesser Himalaya, Pc/C boundary, Shaanxilithes

Procedia PDF Downloads 231
192 Pioneering Conservation of Aquatic Ecosystems under Australian Law

Authors: Gina M. Newton

Abstract:

Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystem-like units) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC, or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State levels remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least in the immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity, ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.

Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species

Procedia PDF Downloads 111
191 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as the selection of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure with a variety of different topology, material and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), in this way gradually refining the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, materials and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality due to the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
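The alternation between NLP subproblems and MILP master problems described above can be illustrated with an off-the-shelf outer-approximation solver. The toy model below is a hypothetical three-element sizing/topology problem written in Pyomo and handed to MindtPy's OA strategy; it is not the authors' MIPSYN formulation, and the material data, constraints, and solver choices are assumptions for illustration only.

```python
# Hypothetical sizing/topology toy MINLP (not the MIPSYN model): binary y[e] decides
# whether candidate element e exists, continuous A[e] is its cross-sectional area.
# Mass is minimized subject to a crude nonlinear stress check; MindtPy's OA strategy
# mirrors the outer-approximation idea described in the abstract.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint, Binary,
                           NonNegativeReals, SolverFactory, minimize)

rho, L, P, sigma_max = 7850.0, 2.0, 1.0e5, 235.0e6   # assumed steel data (SI units)
elements = [1, 2, 3]

m = ConcreteModel()
m.y = Var(elements, domain=Binary)                              # element present or not
m.A = Var(elements, domain=NonNegativeReals, bounds=(0, 0.01))  # area [m^2]

# mass objective: rho * length * area summed over candidate elements
m.mass = Objective(expr=sum(rho * L * m.A[e] for e in elements), sense=minimize)

# an element can only have area if it is selected (big-M linking constraint)
m.link = Constraint(elements, rule=lambda m, e: m.A[e] <= 0.01 * m.y[e])

# crude nonlinear stress check: load shared over the total selected area
m.stress = Constraint(expr=P / (sum(m.A[e] for e in elements) + 1e-9) <= sigma_max)

# at least one element must remain in the structure
m.topology = Constraint(expr=sum(m.y[e] for e in elements) >= 1)

if __name__ == "__main__":
    SolverFactory("mindtpy").solve(m, strategy="OA",
                                   mip_solver="glpk", nlp_solver="ipopt")
    m.A.display()
```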

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 19
190 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations

Authors: Ayesha Anjum, Arbaz Basha

Abstract:

Active Galactic Nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the site of radiation emission. Some of the major classifications of AGN are (a) blazars, with featureless spectra, subclassified as BL Lacertae objects, Flat Spectrum Radio Quasars (FSRQs), and others; (b) Seyferts, with prominent emission line features, classified into Broad Line and Narrow Line Seyferts of Type 1 and Type 2; (c) quasars; and other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects based on automated photometric and spectroscopic methods. A sample of blazars is obtained from the third Fermi catalog. For variability analysis, we searched for light curves of these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) surveys in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 Narrow Line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work is focused on studying the final sample of 256 objects in general and the Seyferts in particular. Because the classification is automated, SDSS has misclassified some of these objects among quasars, galaxies, and stars; the reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both W1 and W2 bands. PMN J0948+0022 is observed between MJD 57154.79 and 58810.57, and PKS 1502+036 between MJD 57232.42 and 58517.11, covering a monitoring period of several years. The data are divided into epochs spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both W1 and W2 bands. This confirms that the objects are variable at mid-infrared wavelengths on both long and short timescales. The sources are also examined for color variability: objects show either a bluer-when-brighter (BWB) or a redder-when-brighter (RWB) trend. A possible explanation for the present objects being BWB is that the longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission region is about one light-day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
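For reference, the excess-variance measure mentioned above is commonly computed as sigma^2_NXS = (S^2 - <sigma_err^2>) / <x>^2, with the fractional variability F_var as its square root. The sketch below applies this to a made-up handful of W1 flux points; the numbers and function names are illustrative and are not drawn from the study's pipeline.

```python
# Minimal sketch of the two variability measures mentioned above, computed from a
# hypothetical WISE W1 light curve (toy values, not the study's data).
import numpy as np

def fractional_variability(flux, flux_err):
    """Normalized excess variance and fractional variability F_var."""
    mean = flux.mean()
    sample_var = flux.var(ddof=1)                 # S^2
    mean_err2 = np.mean(flux_err ** 2)            # <sigma_err^2>
    nxs = (sample_var - mean_err2) / mean ** 2    # sigma^2_NXS
    fvar = np.sqrt(max(nxs, 0.0))
    return nxs, fvar

def amplitude_variability(mag):
    """Peak-to-peak variability amplitude in magnitudes."""
    return mag.max() - mag.min()

# one toy epoch: a handful of W1 flux measurements with errors (made-up numbers)
w1_flux = np.array([2.10, 2.35, 1.95, 2.60, 2.20])
w1_err = np.array([0.05, 0.06, 0.05, 0.07, 0.05])
print(fractional_variability(w1_flux, w1_err))
```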

Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE

Procedia PDF Downloads 106
189 Solids and Nutrient Loads Exported by Preserved and Impacted Low-Order Streams: A Comparison among Water Bodies in Different Latitudes in Brazil

Authors: Nicolas R. Finkler, Wesley A. Saltarelli, Taison A. Bortolin, Vania E. Schneider, Davi G. F. Cunha

Abstract:

Estimating the relative contribution of nonpoint and point sources of pollution in low-order streams is an important tool for water resources management. The location of headwaters in areas with anthropogenic impacts from urbanization and agriculture is a common scenario in developing countries. This condition can lead to conflicts among different water users and compromise ecosystem services. Water pollution also contributes to exporting organic loads to downstream areas, including higher-order rivers. The purpose of this research is to preliminarily assess the nutrient and solids loads exported by water bodies located in watersheds with different types of land use in São Carlos - SP (latitude -22.0087, longitude -47.8909) and Caxias do Sul - RS (latitude -29.1634, longitude -51.1796), Brazil, using regression analysis. The variables analyzed in this study were Total Kjeldahl Nitrogen (TKN), nitrate (NO3-), total phosphorus (TP) and total suspended solids (TSS). Data were obtained in October and December 2015 for São Carlos (SC) and in November 2012 and March 2013 for Caxias do Sul (CXS). Such periods had similar weather patterns regarding precipitation and temperature. Altogether, 11 sites were divided into two groups, some classified as more pristine (SC1, SC4, SC5, SC6 and CXS2), with a predominance of native forest, and others considered impacted (SC2, SC3, CXS1, CXS3, CXS4 and CXS5), presenting larger urban and/or agricultural areas. A preliminary linear regression was applied to the flow and drainage area data of each site (R² = 0.9741), suggesting that the loads to be assessed had a significant relationship with the drainage areas. Thereafter, regression analysis was conducted between the drainage areas and the total loads for the two land use groups. The R² values were 0.070, 0.830, 0.752 and 0.455, respectively, for the TSS, TKN, NO3- and TP loads in the more preserved areas, suggesting that the loads generated by runoff are significant in these locations. However, the respective R² values for sites located in impacted areas were 0.488, 0.054, 0.519 and 0.059 for the TSS, TKN, NO3- and TP loads, indicating a less important relationship between total loads and runoff as compared to the previous scenario. This study suggests three possible conclusions that will be further explored in the full-text article, with more sampling sites and periods: a) in preserved areas, nonpoint sources of pollution are more significant in determining water quality with respect to the studied variables; b) the nutrient (TKN and TP) loads in impacted areas may be associated with point sources such as domestic wastewater discharges with inadequate treatment levels; and c) the presence of NO3- in impacted areas can be associated with runoff, particularly in agricultural areas, where the application of fertilizers is common at certain times of the year.
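A minimal sketch of the regression step described above (instantaneous load = concentration × flow, regressed against drainage area) is given below using SciPy; the site values, units, and variable names are invented placeholders, not the study's data.

```python
# Hedged sketch of relating drainage area to exported loads and reporting R^2.
import numpy as np
from scipy import stats

# hypothetical site data: drainage area (km^2), flow (m^3/s), TKN concentration (mg/L)
area = np.array([12.0, 25.0, 8.5, 40.0, 18.0])
flow = np.array([0.15, 0.33, 0.10, 0.52, 0.24])
tkn_conc = np.array([0.8, 1.2, 0.6, 1.5, 0.9])

# instantaneous load in g/s: (mg/L) * (m^3/s) = g/s
tkn_load = tkn_conc * flow

# simple linear regression of load against drainage area
result = stats.linregress(area, tkn_load)
print(f"R^2 = {result.rvalue**2:.3f}, slope = {result.slope:.4f} g/s per km^2")
```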

Keywords: land use, linear regression, point and non-point pollution sources, streams, water resources management

Procedia PDF Downloads 286
188 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Using fabrics to reinforce composites leads to considerably improved mechanical properties, including resistance to impact loads and the energy absorption of composites. Warp-knitted spacer fabrics (WKSF) are fabrics consisting of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers filled by the pile yarns and give the fabric a three-dimensional shape. Today, because of the unique properties of spacer fabrics, they are widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better moisture transfer and compressive properties and lower heat resistance than PU foam. It seems that the use of warp-knitted spacer fabric reinforced PU foam (WKSFRF) can lead to the production and use of a composite which absorbs energy better than the foam, is easier to mold, and has improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally. The contribution of each of the structural parameters of the WKSF to the absorption of impact energy has also been investigated. For this purpose, WKSF with different structures, such as two different thicknesses, small and large mesh sizes, and meshes either facing or not facing each other, were produced. Then six types of composite samples with different structural parameters were fabricated. The physical properties of the samples, such as weight per unit area and fiber volume fraction of the composite, were measured for three samples of each type of composite. A low-velocity impact with an initial energy of 5 J was carried out on three samples of each type of composite. The output of the low-velocity impact test is an acceleration-time (A-T) graph with many deviation points; in order to achieve appropriate results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newtonian laws of physics, a force-displacement (F-D) graph was derived from the A-T graph. We know that the amount of energy absorbed is equal to the area under the F-D curve. The results show that the maximum energy absorption is 2.858 J, which corresponds to the samples reinforced with fabric of large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of each composite divided by its fiber volume fraction. Using this index, the best EAE among the samples is 21.6, which occurs in the sample with large mesh, high thickness, and meshes facing each other. The EAE of this sample is also 15.6% better than the average EAE of the other composite samples. Generally, the energy absorption on average increased by 21.2% with increasing thickness, by 9.5% with increasing mesh size from small to large, and by 47.3% with changing the position of the meshes from facing to non-facing.
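The post-processing chain described above (zero-phase filtering of the A-T record, integration, F = ma, and energy as the area under the F-D curve) can be sketched as follows. The original work used MATLAB's FILTFILT; this is a Python/SciPy analogue with an assumed impactor mass, sampling rate, cutoff frequency, and synthetic data.

```python
# Sketch of the impact post-processing chain: filter the acceleration-time record,
# integrate to displacement, build the force-displacement curve, integrate for energy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000.0            # sampling rate [Hz], assumed
mass = 5.0               # impactor mass [kg], assumed
t = np.arange(0, 0.01, 1 / fs)
accel_raw = 800 * np.sin(np.pi * t / 0.01) + 30 * np.random.randn(t.size)  # toy A-T data

# zero-phase low-pass filter (Python analogue of the MATLAB FILTFILT step)
b, a = butter(4, 1000 / (fs / 2), btype="low")
accel = filtfilt(b, a, accel_raw)

# Newton's second law and successive numerical integration
force = mass * accel                      # F = m * a
velocity = np.cumsum(accel) / fs          # v = integral of a dt
displacement = np.cumsum(velocity) / fs   # d = integral of v dt

# absorbed energy = area under the force-displacement curve
energy = np.trapz(force, displacement)
print(f"Absorbed energy ~ {energy:.3f} J")
```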

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 142
187 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, that is still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance of using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of the Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we could observe that the best results were achieved when either morphological descriptors or Wavelet features were used as the input stimuli.
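To make the comparison of input stimuli concrete, the sketch below extracts the five representations discussed above from a single synthetic EEG window using NumPy, SciPy, and PyWavelets; the window length, wavelet family, and descriptor set are assumptions, not the study's exact configuration.

```python
# Illustrative extraction of the five input-stimulus types from one (toy) EEG window.
import numpy as np
from scipy.signal import stft
import pywt

fs = 256                                   # sampling rate [Hz], assumed
window = np.random.randn(fs)               # one second of synthetic EEG

raw_input = window                                       # 1) raw signal
fft_spectrum = np.abs(np.fft.rfft(window))               # 2) FFT spectrum
_, _, Zxx = stft(window, fs=fs, nperseg=64)
stft_spectrogram = np.abs(Zxx)                           # 3) STFT spectrogram
wavelet_feats = np.concatenate(pywt.wavedec(window, "db4", level=4))  # 4) wavelet features
descriptors = np.array([window.max() - window.min(),     # 5) morphological descriptors
                        np.argmax(window) / fs,          #    (amplitude, peak latency,
                        np.mean(np.abs(np.diff(window)))])  #  mean slope - illustrative)

for name, x in [("raw", raw_input), ("fft", fft_spectrum), ("stft", stft_spectrogram),
                ("wavelet", wavelet_feats), ("descriptors", descriptors)]:
    print(name, np.asarray(x).shape)
```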

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 503
186 The Effect of Teachers' Personal Values on the Perceptions of the Effective Principal and Student in School

Authors: Alexander Zibenberg, Rima’a Da’As

Abstract:

According to the authors, individuals are naturally inclined to classify people as leaders and followers. Individuals utilize cognitive structures or prototypes specifying the traits and abilities that characterize the effective leader (implicit leadership theories) and the effective follower in an organization (implicit followership theories). Thus, the present study offers insights into understanding how teachers' personal values (self-enhancement and self-transcendence) explain the preference for styles of the effective leader (i.e., principal) and assumptions about the traits and behaviors that characterize effective followers (i.e., students). Beyond the direct effect on perceptions of effective types of leader and follower, the present study argues that values may also interact with organizational and personal contexts in influencing perceptions. Thus, the authors suggest that teachers' managerial position may moderate the relationships between personal values and perceptions of the effective leader and follower. Specifically, two key questions are addressed in the present research: (1) Is there a relationship between personal values and perceptions of the effective leader and effective follower? and (2) Are these relationships stable, or could they change across different contexts? Two hundred fifty-five Israeli teachers participated in this study, completing questionnaires about the effective student and the effective principal. Results of structural equation modeling (SEM) with maximum likelihood estimation showed, first, that the model fit the data well. Second, the researchers found a positive relationship between self-enhancement and the anti-prototypes of the effective principal and the effective student. The relationships between the self-transcendence value and both perceptions were found to be significant as well: self-transcendence was positively related to the way the teacher perceives the prototypes of the effective principal and the effective student. In addition, the authors found that teachers' managerial position moderates these relationships. The article contributes to the literature both on perceptions and on personal values. Although several earlier studies explored issues of implicit leadership theories and implicit followership theories, personality characteristics (values) have garnered less attention in this matter. This study shows that personal values, which are deeply rooted, abstract motivations that guide, justify or explain attitudes, norms, opinions and actions, explain differences in the perception of the effective leader and follower. The results advance the theoretical understanding of the relationship between personal values and individuals' perceptions in organizations. An additional contribution of this study is the use of the teacher's managerial position to explain a potential boundary condition of the translation of personal values into outcomes. The findings suggest that through the management process in the organization, teachers acquire knowledge and skills which augment their ability (beyond their personal values) to predict perceptions of ideal types of principal and student. The study elucidates the unique role of personal values in understanding organizational thinking. It seems that personal values might explain differences in individual preferences for organizational paradigms (mechanistic vs. organic).

Keywords: implicit leadership theories, implicit followership theories, organizational paradigms, personal values

Procedia PDF Downloads 137
185 Analysis of Lesotho Wool Production and Quality Trends 2008-2018

Authors: Papali Maqalika

Abstract:

Lesotho farmers produce significant quantities of Merino wool of a quality competitive on the global market, and this fibre makes a substantial impact on the economy of Lesotho. However, even with this economic contribution, the production and quality information and trends of the fibre have neither been recognised nor documented. This is a serious shortcoming, as Lesotho wool is unknown on international markets. The situation is worsened by the fact that Lesotho wool is auctioned together with South African wool, making trading and benchmarking Lesotho wool difficult, not to mention attempts to advance its production and quality. Based on the above, available data on Lesotho wool covering 10 years were collected and analysed for trends to be used in benchmarking where applicable. The fibre properties analysed include fibre diameter (fineness), vegetable matter and yield, application and price. These were selected because they are fundamental in determining fibre quality and price. Production of wool in Lesotho has increased slightly over the ten years covered by this study. It also became apparent that the production and quality trends of Lesotho wool are greatly influenced by farming practices, breed of sheep, and climatic conditions. Greater adoption of the Merino sheep breed, sheds/barns, and sheep coats are suggested as ways to reduce the mortality rate (due to extremely cold temperatures), to reduce the vegetable matter on the fibre and thus improve its quality, and to increase yield per sheep and production as a whole. Some farming practices, such as the lack of barns, supplementary feeding and veterinary care, present constraints on wool production. The districts in the Highlands region were found to have the highest production of mostly wool, this being ascribed to better pastures and to climatic, social and other conditions conducive to wool production. The production of Lesotho wool and its quality can be improved further, possibly because of the interventions the Ministry of Agriculture introduced through the Small Agricultural and Development Project (SADP) and other appropriate initiatives by the National Wool and Mohair Growers Association (NWMGA). The challenge, however, remains the lack of direct involvement of the wool growers (farmers) in decision making and policy development, which potentially leads to reluctance to adopt the strategies. In some cases, the wool growers do not receive the benefits associated with the interventions immediately. Based on these findings, it is recommended that the relevant educators and researchers in wool and textile science, as well as the local wool farmers in Lesotho, be represented in policy and other decision-making forums relating to these interventions. In this way, educational campaigns and training workshops will be demand-driven, with a better chance of adoption and success, because the direct beneficiaries will have been involved from inception and will have a sense of ownership as well as the intent to see them through successfully.

Keywords: lesotho wool, wool quality, wool production, lesotho economy, global market, apparel wool, database, textile science, exports, animal farming practices, intimate apparel, interventions

Procedia PDF Downloads 60
184 Phenolic Composition of Wines from Cultivar Carménère during Aging with Inserts to Barrels

Authors: E. Obreque-Slier, P. Osorio-Umaña, G. Vidal-Acevedo, A. Peña-Neira, M. Medel-Marabolí

Abstract:

The sensory and nutraceutical characteristics of a wine are determined by different chemical compounds, such as organic acids, sugars, alcohols, polysaccharides, aromas, and polyphenols. The polyphenols are secondary metabolites that are associated with the prevention of several pathologies and are responsible for color, aroma, bitterness, and astringency in wines. These compounds come from the grapes and from the wood during aging in barrels, the wood format most widely used in wine production. However, barrels are a high-cost input with a limited useful life (3-4 years). For this reason, some oenological products have been developed in order to renew the barrels and extend their useful life by some years. These formats are being adopted slowly because limited information exists about their effect on the chemical characteristics of the wine. The objective of the study was to evaluate the effect of different barrel renewal systems (staves and zigzag inserts) on the polyphenolic characteristics of a Carménère wine (Vitis vinifera), an emblematic cultivar of Chile. For this, a completely randomized experimental design with 5 treatments and three replicates per treatment was used. The treatments were: new barrels (T0), barrels used for 4 years (T1), scraped used barrels (T2), used barrels with staves (T3) and used barrels with zigzag inserts (T4). The study was performed over 12 months, and different spectrophotometric parameters (total phenols, anthocyanins, and total tannins) and low molecular weight phenols (by HPLC-DAD) were evaluated. The wood inputs were donated by Toneleria Nacional and corresponded to products from the same production batch. The total phenol content increased significantly after 40 days, while the total tannin concentration decreased gradually during the study. The anthocyanin concentration increased after 120 days of the assay in all treatments. Comparatively, it was observed that the wine of T2 presented the lowest values of these polyphenols, while T0 and T4 presented the highest total phenol contents. Also, T1 presented the highest values of total tannins relative to the rest of the treatments in some samples. The low molecular weight phenolic compounds identified by HPLC-DAD were 7 flavonoids (epigallocatechin, catechin, procyanidin gallate, epicatechin, quercetin, rutin and myricetin) and 14 non-flavonoids (gallic, protocatechuic, hydroxybenzoic, trans-coutaric, vanillic, caffeic, syringic, p-coumaric and ellagic acids; tyrosol, vanillin, syringaldehyde, trans-resveratrol and cis-resveratrol). Tyrosol was the most abundant compound, whereas ellagic acid was the least abundant in the samples. Comparatively, the wines of T2 showed the lowest concentrations of flavonoid and non-flavonoid phenols during the study. In contrast, the wines of T1, T3, and T4 presented the highest contents of non-flavonoid polyphenols. In summary, the use of barrel renovators (zigzag inserts and staves) is an interesting alternative which would emulate the contribution of polyphenols from the barrels to the wine.

Keywords: barrels, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 172
183 Historical Memory and Social Representation of Violence in Latin American Cinema: A Cultural Criminology Approach

Authors: Maylen Villamanan Alba

Abstract:

Latin America is marked by its history: conquest, colonialism, and slavery left deep footprints in most Latin American countries. The past century was also affected by wars, military dictatorships, and political violence, which profoundly influenced Latin American popular culture. Consequently, reminiscences of historical crimes are frequently present in daily life, media, public opinion, and the arts. This legacy is remembered in novels, paintings, songs, and films. In fact, Latin American cinema has a trend that strives for verisimilitude with reality in fiction films. These films about historical violence are narrated through fictional characters, but their stories are based on real historical contexts. Therefore, cultural criminology has considered film a significant field for understanding social representations of violence related to historical crimes. The aim of the present contribution is to analyze the legacy of the past and historical memory in social representations of violence in Latin American cinema as a critical approach to historical crimes. This qualitative research is based on content analysis. The sample is seven multi-award-winning films of the International Festival of New Latin American Cinema of Havana. The films selected are Kamchatka, Argentina (2002); Carandiru, Brazil (2003); Enlightened by Fire, Argentina (2005); Post Mortem, Chile (2010); No, Chile (2012); Wakolda, Argentina (2013); and The Clan, Argentina (2015). Cultural criminology highlights that cinema shapes the meanings of social practices such as historical crimes. Critical criminology offers a critical theory framework to interpret Latin American cinema. This analysis reveals historical conditions deeply associated with power relationships, policy, and inequality issues. As indicated by this theory, violence is characterized as a structural process based on social asymmetries. These social asymmetries cut across different social scopes, including institutional and personal dimensions. Thus, institutions of the state are depicted through the personal stories of characters involved in human conflicts. Intimacy and social background are linked in characters who simultaneously perform roles such as soldiers, policemen, professionals or inmates and are at the same time depicted as human beings with family, gender, racial, ideological or generational issues. Social representations of violence related to this legacy of the past are a portrait of historical crimes perpetrated against Latin American citizens. Thereby, they have contributed to political positions, social behaviors, and public opinion. The legacy of these historical crimes suggests a path that should never be taken again; it is a reminder, a warning, and a historical lesson for Latin American people. Social representations of violence are permeated by historical memory as denunciation under a critical approach.

Keywords: Latin American cinema, historical memory, social representation, violence

Procedia PDF Downloads 119