Search results for: complex pain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6277

667 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, agencies such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper also seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat facing the United States, it is theorized that a lack of coordination and alignment in epistemological functions will likewise result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 77
666 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solution systems in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been observed that these IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. The problem is thus the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs cause in WSNs. In other words, not all requirements are implemented and then traced; nor are all requirements identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs, resulting in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding the application of requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
665 Lineament Analysis as a Method of Mineral Deposit Exploration

Authors: Dmitry Kukushkin

Abstract:

Lineaments form complex grids on Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. But lineament grids are made up of global, regional, and local components, and this superimposition of lineament grids of various scales renders the method less effective. In addition, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. As a result, the specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation to local geological structures. Our method allows us to confidently determine the location and boundaries of local geological structures that are likely to contain mineral deposits. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upward of 200%, and by a stable correlation to local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field – the surface manifestation of the stress and strain fields of Earth associated with geological structures of global, regional, and local scales. Each of these structures has its own field of brittle dislocations that appears on the surface as its lineament field. Our method singles out the local components by suppressing the global and regional components of the general lineament field; the remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by seismic survey; 3. Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. A promising object for hydrocarbon exploration in the north of Algeria - correlated with the results of geological, geochemical, and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intensive anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate the presence of gold at commercial levels. Our suppression of the global and regional components isolates a local lineament field. In the early stages of geological exploration for oil and gas, this allows the boundaries of various geological structures to be determined with very high reliability. Our method therefore allows the placement of seismic profiles and exploratory drilling equipment to be optimized, which leads to a reduction in the costs of prospecting for and exploring deposits, as well as the acceleration of their commissioning.
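
The scale-separation step can be illustrated with a simple sketch. The abstract does not disclose the authors' actual suppression procedure, so the Gaussian low-pass approximation of the regional component, the window width, and the synthetic grid below are assumptions made only to show the idea of isolating a residual local field:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_lineament_density(density, regional_sigma=25.0):
    """Suppress the global/regional components of a lineament density grid.

    density: 2-D array of specific lineament density per cell. The
    regional/global component is approximated by a broad Gaussian
    low-pass; the residual is taken as the local lineament field.
    """
    regional = gaussian_filter(density, sigma=regional_sigma)
    return density - regional

# synthetic demo: a weak local anomaly on top of a strong regional trend
yy, xx = np.mgrid[0:200, 0:200]
regional_trend = 1.0 + 0.005 * xx                               # regional ramp
local_anomaly = 0.3 * np.exp(-((xx - 120)**2 + (yy - 80)**2) / 200.0)
density = regional_trend + local_anomaly + 0.05 * np.random.rand(200, 200)

res = residual_lineament_density(density)
print("peak residual is %.0f%% of the mean |residual|"
      % (100 * res.max() / np.abs(res).mean()))
```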

Keywords: lineaments, mineral exploration, oil and gas, remote sensing

Procedia PDF Downloads 304
664 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger's classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era in which we are now living, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity's supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress are some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production (as apparent in the cases of Amazon workers' strikes, Donald Trump's 2016 campaign, the Facebook and Microsoft data scandals, and more), are often non-transparent to the wider public's eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 152
663 Degradation of the Cu-DOM Complex by Bacteria: A Way to Increase Phytoextraction of Copper in a Vineyard Soil

Authors: Justine Garraud, Hervé Capiaux, Cécile Le Guern, Pierre Gaudin, Clémentine Lapie, Samuel Chaffron, Erwan Delage, Thierry Lebeau

Abstract:

The repeated use of Bordeaux mixture (copper sulphate) and other chemical forms of copper (Cu) has led to its accumulation in wine-growing soils for more than a century, to the point of modifying the ecosystem of these soils. Phytoextraction of copper could progressively reduce the Cu load in these soils, and even allow copper to be recycled (e.g., as a micronutrient in animal nutrition) by cultivating the extracting plants in the inter-rows of the vineyards. Soil clean-up usually requires several years because the chemical speciation of Cu in solution is dominated by forms complexed with dissolved organic matter (DOM) that are not phytoavailable, unlike the 'free' forms (Cu2+). Indeed, more than 98% of the Cu in solution is bound to DOM. The selection and inoculation in vineyard soils of bacteria (bioaugmentation) able to degrade Cu-DOM complexes could increase the phytoavailable pool of Cu2+ in the soil solution (in addition to bacteria which first mobilize Cu into solution from the soil bearing phases) in order to increase phytoextraction performance. In this study, seven Cu-accumulating plants potentially usable in the inter-row were tested for their Cu phytoextraction capacity in hydroponics (ryegrass, brown mustard, buckwheat, hemp, sunflower, oats, and chicory). A bacterial consortium was also tested: Pseudomonas sp., previously studied for its ability to mobilize Cu through the pyoverdine siderophore (a complexing agent) and potentially to degrade Cu-DOM complexes, and a second bacterium (to be selected) able to promote the survival of Pseudomonas sp. following its inoculation into soil. An interaction network method was used, based on the notions of co-occurrence and, therefore, of bacterial abundances found in the same soils. Bacteria from the EcoVitiSol project (Alsace, France) were targeted. The final step consisted of coupling the bacterial consortium with the chosen plant in soil pots. The degradation of Cu-DOM complexes is measured on the basis of the absorption index at 254 nm, which gives insight into the aromaticity of the DOM. The 'free' Cu in solution (from the mobilization of Cu and/or the degradation of Cu-DOM complexes) is assessed by measuring pCu. Finally, Cu accumulation in plants is measured by ICP-AES. The selection of the plant is currently being finalized. The interaction network method identified the best positive interactions as those of Flavobacterium sp. with Pseudomonas sp. These bacteria are both PGPR (plant growth promoting rhizobacteria) with the ability to improve plant growth and to mobilize Cu from the soil bearing phases (siderophores). These bacteria are also known to degrade phenolic groups, which are highly present in DOM, and could therefore contribute to the degradation of Cu-DOM complexes. The results of the upcoming bacteria-plant coupling tests in pots will also be presented.
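
The aromaticity index mentioned above can be made concrete with a small computation. As a caveat: the abstract only states that absorbance at 254 nm is used; normalizing it by dissolved organic carbon (the common SUVA254 index) and the example values below are assumptions for illustration, not the authors' stated protocol:

```python
def suva254(a254_per_cm, doc_mg_per_l):
    """Specific UV absorbance at 254 nm (L mg-C^-1 m^-1), assuming the
    common SUVA254 definition.

    a254_per_cm: decadic absorbance at 254 nm over a 1 cm path length
    doc_mg_per_l: dissolved organic carbon concentration (mg C/L)
    Higher SUVA254 indicates more aromatic DOM; a drop over time is
    consistent with degradation of the aromatic Cu-binding fraction.
    """
    return a254_per_cm / doc_mg_per_l * 100.0

# hypothetical example: absorbance falls from 0.30 to 0.18 cm^-1
# at a constant DOC of 5 mg C/L
print(suva254(0.30, 5.0), "->", suva254(0.18, 5.0))  # 6.0 -> 3.6
```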

Keywords: Cu-DOM complexes, bioaugmentation, phytoavailability, phytoextraction

Procedia PDF Downloads 81
662 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures which store geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies acquire high-resolution samplings, which lead to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our solution is a two-step approach to random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes, on which we then invoke data parallelism for compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to revolutionize the way e-commerce and Virtual Reality technology work on consumer electronic devices. Objects can be compressed on the server and transmitted over the network; progressive decompression can then be performed on the client device and rendered. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% increase over the serial implementation), and quality of user experience in the web browser.
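
The partition-then-parallel-compress structure can be sketched generically. This is not the authors' progressive codec: the byte-level zlib compressor, the equal-size vertex split, and the pool size below are stand-in assumptions chosen only to show the data-parallel shape of the first step:

```python
import zlib
import numpy as np
from multiprocessing import Pool

def compress_submesh(vertices_bytes):
    """Compress one sub-mesh's vertex buffer (a placeholder for a real
    progressive mesh codec)."""
    return zlib.compress(vertices_bytes, level=6)

def parallel_compress(vertices, n_parts=4):
    """Partition a vertex array into sub-meshes and compress them in
    parallel, mirroring the partition/data-parallel scheme above."""
    parts = np.array_split(vertices, n_parts)
    with Pool(n_parts) as pool:
        return pool.map(compress_submesh, [p.tobytes() for p in parts])

if __name__ == "__main__":
    verts = np.random.rand(100_000, 3).astype(np.float32)  # toy mesh
    blobs = parallel_compress(verts)
    ratio = sum(len(b) for b in blobs) / verts.nbytes
    print(f"compressed to {ratio:.1%} of the original size")
```

Each compressed sub-mesh can then be transmitted independently, which is what makes the scheme random-accessible: the client requests and decompresses only the sub-meshes it needs for the current view.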

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 170
661 Frequency of Tube Feeding in Aboriginal and Non-Aboriginal Head and Neck Cancer Patients and the Impact on Relapse and Survival Outcomes

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancer and its treatments are known for their profound effect on nutrition, and tube feeding is a common requirement to maintain nutrition. Aim: We aimed to evaluate the frequency of tube feeding in Aboriginal and non-Aboriginal patients and to examine the relapse and survival outcomes in patients who require enteral tube feeding. Methods: We performed a retrospective cohort analysis of 320 head and neck cancer patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. Data collected included patient demographics, tumour features, treatment details, and cancer and survival outcomes. Results: Aboriginal and non-Aboriginal patients required feeding tubes at similar rates (42.5% vs 46.2%, respectively); however, Aboriginal patients were far more likely to fail to return to oral nutrition, with 26.3% requiring long-term tube feeding versus only 15% of non-Aboriginal patients. In the overall study population, 27.5% required short-term tube feeding, 17.8% required long-term enteral tube nutrition, and 45.3% of patients did not have a feeding tube at any point. Relapse was more common in patients who required tube feeding, with relapses in 42.1% of the patients requiring long-term tube feeding and 31.8% of those requiring a short-term tube, versus 18.9% in the 'no tube' group. Survival outcomes for patients who required a long-term tube were also significantly poorer than for patients who required a short-term tube or none at all. Long-term tube-requiring patients were half as likely to survive (29.8%) as patients requiring a short-term tube (62.5%) or no tube at all (63.5%). Patients requiring a long-term tube were twice as likely to die with active disease (59.6%) as patients with no tube (28%) or a short-term tube (33%). This may suggest an increased relapse risk in patients who require long-term feeding, due to the consequences of malnutrition on cancer and treatment outcomes, although it may simply reflect that patients with recurrent disease were more likely to have longer-term swallowing dysfunction due to recurrent disease and salvage treatments. Interestingly, long-term tube patients were also more likely to die with no active disease (10.5%, compared with 4.6% of short-term tube patients and 8% of patients with no tube), which likely reflects the increased mortality associated with long-term aspiration and malnutrition issues. Conclusions: The requirement for tube feeding was associated with a higher rate of cancer relapse, and in particular, long-term tube feeding was associated with a higher likelihood of dying from head and neck cancer, but also a higher risk of dying from other causes without cancer relapse. These data reflect the complex effect of head and neck cancer and its treatments on swallowing and nutrition and, ultimately, the effects of malnutrition, swallowing dysfunction, and aspiration on overall cancer and survival outcomes. Tube feeding was seen at similar rates in Aboriginal and non-Aboriginal patients; however, failure to return to oral intake with a requirement for a long-term feeding tube was seen far more commonly in the Aboriginal population.

Keywords: head and neck cancer, enteral tube feeding, malnutrition, survival, relapse, Aboriginal patients

Procedia PDF Downloads 102
660 Modeling and Analysis of Occupant Behavior on Heating and Air Conditioning Systems in a Higher Education and Vocational Training Building in a Mediterranean Climate

Authors: Abderrahmane Soufi

Abstract:

The building sector is the largest consumer of energy in France, accounting for 44% of French consumption. To reduce energy consumption and improve energy efficiency, France implemented an energy transition law targeting 40% energy savings by 2030 in the tertiary building sector. Building simulation tools are used to predict the energy performance of buildings, but their reliability is hampered by discrepancies between the real and simulated energy performance of a building. This performance gap stems from the simplified assumptions made about certain factors, such as occupant behavior towards air conditioning and heating, which is treated as deterministic by setting a fixed operating schedule and a fixed indoor comfort temperature. However, occupant behavior towards air conditioning and heating is stochastic, diverse, and complex because it can be affected by many factors. Probabilistic models are an alternative to deterministic models. These models are usually derived from statistical data and express occupant behavior by assuming a probabilistic relationship to one or more variables. In the literature, logistic regression has been used to model occupant behavior towards heating and air conditioning systems with univariate logistic models in residential buildings; however, few studies have developed multivariate models for higher education and vocational training buildings in a Mediterranean climate. Therefore, in this study, occupant behavior towards heating and air conditioning systems was modeled using logistic regression. Occupant behavior related to turning on the heating and air conditioning systems was studied through experimental measurements collected over a period of one year (June 2023-June 2024) in three classrooms occupied by several groups of students in engineering schools and professional training. Instrumentation was installed to collect indoor temperature and indoor relative humidity at 10-min intervals. Furthermore, the state of the heating/air conditioning system (off or on) and the set point were recorded. The outdoor air temperature, relative humidity, and wind speed were collected as weather data. The number of occupants, their age, and their sex were also considered. Logistic regression was used to model an occupant turning on the heating and air conditioning systems. The results yield a proposed model that can be used in building simulation tools to predict the energy performance of teaching buildings. Based on the first months (summer and early autumn) of the investigation, the results illustrate that occupant behavior towards the air conditioning systems is affected by the indoor relative humidity and temperature in June, July, and August, and by the indoor relative humidity, temperature, and number of occupants in September and October. Occupant behavior was analyzed monthly, and univariate and multivariate models were developed.
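
A multivariate model of this kind can be fitted in a few lines. The sketch below is a minimal stand-in, assuming synthetic 10-minute records and illustrative coefficient values rather than the study's measured classroom data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# toy stand-in for the 10-minute classroom records; the real study uses
# measured indoor temperature/humidity, occupancy, and AC on/off state
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "t_in": rng.normal(27, 3, n),     # indoor temperature (deg C)
    "rh_in": rng.normal(55, 10, n),   # indoor relative humidity (%)
    "n_occ": rng.integers(0, 30, n),  # number of occupants
})
# synthetic ground truth used only to generate plausible labels
logit_true = -20 + 0.6 * df.t_in + 0.05 * df.rh_in + 0.05 * df.n_occ
df["ac_on"] = rng.random(n) < 1 / (1 + np.exp(-logit_true))

# multivariate logistic model: P(turn on AC) = 1 / (1 + exp(-Xb))
X = sm.add_constant(df[["t_in", "rh_in", "n_occ"]])
model = sm.Logit(df["ac_on"].astype(int), X).fit(disp=0)
print(model.params)  # fitted coefficient per driver
```

The fitted coefficients then slot into a building simulation as a per-timestep switch-on probability instead of a fixed schedule.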

Keywords: occupant behavior, logistic regression, behavior model, Mediterranean climate, air conditioning, heating

Procedia PDF Downloads 60
659 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context

Authors: Andrea Fiorista

Abstract:

The learning of prepositions is a problematic aspect of foreign language instruction, and Italian is certainly no exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other terms, prepositions encode concepts such as directionality, the collocation of objects in space and time and, in Cognitive Linguistics' terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize them differently, implying that they must carry out a recategorization (or create new categories) fitting the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, which learners must then memorize, most of the time without success. In their prototypical meaning, prepositions are used to specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extended uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors, and embodiment represent efficient cognitive tools for such a task. Indeed, while learning the merely spatial use of the preposition a (e.g., Sono a Roma = I am in Rome; vado a Roma = I am going to Rome) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g., Vado a studiare = I am going to study), the inchoative periphrasis (e.g., Tra poco mi metto a leggere = In a moment I will start reading), and the causative construction (e.g., Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention in which a basic cognitive schema is used to help teachers explain, and students understand, the extended uses of a. The educational material employed translates Cognitive Linguistics' theoretical constructs, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible to learners. Illustrative material, indeed, is supposed to make metalinguistic content more accessible. Moreover, the concept of embodiment is applied pedagogically through activities involving motion and learners' bodily engagement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective in both the short and the long term.

Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL

Procedia PDF Downloads 87
658 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection

Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili

Abstract:

The results of this study relate to plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is effectively used in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field; their electromagnetic response can be detected at the low-frequency range even when they are placed in the ground. Detection of plastic objects such as plastic mines by electromagnetic induction is associated with difficulties: the interaction of non-conducting bodies or low-conductive objects with a low-frequency alternating magnetic field is very weak. At the high-frequency range, where wave processes already take place, the interaction increases, but interactions with other, distant objects also increase; a complex interference picture is formed, and the extraction of useful information also meets difficulties. Sensing by electromagnetic induction at the intermediate MHz frequency range is the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components of plastic mines, taking their construction into account. A detector node based on the Analog Devices AD8302 amplitude and phase detector has been developed for experimental studies. The node has two inputs. At one input, the node receives a sinusoidal signal from the generator, to which a transmitting coil is also connected. The receiver coil is attached to the second input of the node. An additional circuit provides the option to amplify the signal from the receiver coil by 20 dB. The node has two outputs; the voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed at different positions of the transmitter and receiver coils over the frequency range 1-20 MHz. An Arbitrary/Function Generator Tektronix AFG3052C and the eight-channel high-resolution oscilloscope PicoScope 4824 were used in the experiments. Experimental measurements were also performed with a low-conductive test object. The results of the measurements and a comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The results of the experimental measurements are compared and analyzed against the results of appropriate computer modeling based on the method of auxiliary sources (MAS). The experimental measurements are driven from the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (grant number NFR 17_523).
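
Converting the node's two output voltages into an amplitude ratio and a phase difference is a linear mapping. The sketch below assumes the AD8302's nominal datasheet transfer characteristics (30 mV/dB centred at 900 mV for 0 dB gain; 10 mV/degree falling from 1.8 V at 0° to 0 V at 180°); a real board should be calibrated rather than relying on these typical values:

```python
def ad8302_decode(v_mag, v_phs):
    """Decode nominal AD8302 outputs into gain ratio (dB) and |phase| (deg).

    Assumes datasheet-typical slopes: 30 mV/dB (900 mV at 0 dB) for the
    magnitude output and -10 mV/deg (1.8 V at 0 deg, 0 V at 180 deg) for
    the phase output; the sign of the phase difference is ambiguous.
    """
    gain_db = (v_mag - 0.9) / 0.030
    phase_deg = (1.8 - v_phs) / 0.010
    return gain_db, phase_deg

# example: 1.2 V on the magnitude output, 0.6 V on the phase output
print(ad8302_decode(1.2, 0.6))  # -> (10.0 dB, 120.0 deg)
```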

Keywords: EM induction sensing, detector, plastic mines, remote sensing

Procedia PDF Downloads 149
657 The Role of Islamic Finance and Socioeconomic Factors in Financial Inclusion: A Cross Country Comparison

Authors: Allya Koesoema, Arni Ariani

Abstract:

While religion is only a very minor factor contributing to financial exclusion in most countries, the World Bank's 2014 Global Financial Development Report highlighted it as a significant barrier to having a financial account in some Muslim-majority countries. This is in part due to the perceived incompatibility between traditional financial institutions' practices and Islamic finance principles. In these cases, the development of financial institutions and products that are compatible with the principles of Islamic finance may act as an important lever for increasing formal account ownership. However, there is significant diversity in the relationship between a country's proportion of Muslim population and its level of financial inclusion. This paper combines data taken from the Global Findex Database, the World Development Indicators, and the Pew Research Center to quantitatively explore the relationship of individual- and country-level religious and socioeconomic factors to financial inclusion. Results from regression analyses show a complex relationship between financial inclusion and religion-related factors in the population at both the individual and country level. Consistent with prior literature, on average, the percentage of Muslim population correlates positively with the proportion of the unbanked population who cite religious reasons as a barrier to getting an account. However, its impact varies across several variables. First, a deeper look into countries' religious composition reveals that the average negative impact of a large Muslim population is not as strong in more religiously diverse and less religious countries. Second, on the individual level, among the unbanked, the poorest quintile, the least educated, the older, and the female populations are comparatively more likely to lack an account for religious reasons. Results also indicate that informal mechanisms partially substitute for formal financial inclusion in this case, as shown by the propensity to borrow from family and friends. The individual-level findings are important because the demographic groups that are more likely to cite religious reasons as barriers to formal financial inclusion are also generally perceived to be more vulnerable socially and economically and may need targeted attention. Finally, the number of Islamic financial institutions in a particular country is negatively correlated with the propensity to cite religious reasons as a barrier to financial inclusion. Importantly, the number of financial institutions in a country also mitigates the negative impact of the proportion of Muslim population, low education, and individual age on formal financial inclusion. These results point to the potential importance of Islamic finance institutions in increasing global financial inclusion, and highlight the importance of looking beyond the proportion of Muslim population to other underlying institutional and socioeconomic factors in maximizing their impact.
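
The mitigation effect described above is the kind of result an interaction term captures. The sketch below is a toy stand-in for the Findex/WDI/Pew merge: the variable names, the synthetic data, and the linear specification are all assumptions made only to show where the interaction enters the regression:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical country-level panel standing in for the merged datasets
rng = np.random.default_rng(1)
n = 150
d = pd.DataFrame({
    "muslim_share": rng.uniform(0, 1, n),   # Pew religious composition
    "islamic_banks": rng.poisson(5, n),     # count of Islamic FIs
    "gdp_pc": rng.lognormal(9, 1, n),       # WDI control variable
})
# religious-barrier rate rises with muslim_share, damped by islamic_banks
d["relig_barrier"] = (0.05 + 0.15 * d.muslim_share
                      - 0.01 * d.muslim_share * d.islamic_banks
                      + rng.normal(0, 0.02, n)).clip(0, 1)

# the interaction term carries the mitigation effect reported above
m = smf.ols("relig_barrier ~ muslim_share * islamic_banks + np.log(gdp_pc)",
            data=d).fit()
print(m.params[["muslim_share", "muslim_share:islamic_banks"]])
```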

Keywords: cross country comparison, financial inclusion, Islamic banking and finance, quantitative methods, socioeconomic factors

Procedia PDF Downloads 192
656 In Vitro Fermentation of β-Glucan-Rich Pleurotus eryngii Mushroom: Impact on Faecal Bacterial Populations and Intestinal Barrier in Autistic Children

Authors: Georgia Saxami, Evangelia N. Kerezoudi, Evdokia K. Mitsou, Marigoula Vlassopoulou, Georgios Zervakis, Adamantini Kyriacou

Abstract:

Autism Spectrum Disorder (ASD) is a complex group of developmental disorders of the brain, characterized by social and communication dysfunctions and stereotyped and repetitive behaviors. The potential interaction between gut microbiota (GM) and autism has not been fully elucidated. Children with autism often suffer from gastrointestinal dysfunctions, and alterations or dysbiosis of the GM have also been observed. Treatment with dietary components has been postulated to regulate the GM and improve gastrointestinal symptoms, but there is a lack of evidence for such approaches in autism, especially for prebiotics. This study assessed the effects of Pleurotus eryngii mushroom (a candidate prebiotic) and inulin (a known prebiotic compound) on gut microbial composition, using faecal samples from autistic children in an in vitro batch culture fermentation system. Selected members of the GM were enumerated at baseline (0 h) and after 24 h of fermentation by quantitative PCR. After 24 h of fermentation, inulin and P. eryngii mushroom induced a significant increase in total bacteria and Faecalibacterium prausnitzii compared to the negative control (the gut microbiota of each autistic donor with no carbohydrate source), whereas both treatments induced a significant increase in the levels of total bacteria, Bifidobacterium spp. and Prevotella spp. compared to baseline (t = 0 h) (p for all < 0.05). Furthermore, this study evaluated the impact of fermentation supernatants (FSs), derived from P. eryngii mushroom or inulin, on the expression levels of tight junction genes (zonulin-1, occludin, and claudin-1) in Caco-2 cells stimulated by bacterial lipopolysaccharides (LPS). Pre-incubation of Caco-2 cells with FS from P. eryngii mushroom led to a significant increase in the expression levels of the zonulin-1, occludin, and claudin-1 genes compared to the untreated cells, the cells subjected to LPS, and the cells challenged with FS from the negative control (p for all < 0.05). In addition, incubation with FS from P. eryngii mushroom led to the highest mean expression values for the zonulin-1 and claudin-1 genes, which differed significantly from inulin (p for all < 0.05). Overall, this research highlighted the beneficial in vitro effects of P. eryngii mushroom on the composition of the GM of autistic children after 24 h of fermentation. Our data also highlighted the potential preventive effect of P. eryngii FSs against dysregulation of the intestinal barrier, through upregulation of tight junction genes associated with the integrity and function of the intestinal barrier. This research has been financed by 'Supporting Researchers with Emphasis on Young Researchers - Round B', Operational Program 'Human Resource Development, Education and Lifelong Learning.'
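
Relative expression of tight junction genes is typically reported as a fold change against a reference gene and an untreated control. The sketch below assumes the standard Livak 2^-ΔΔCt calculation; the abstract does not state which quantification method or reference gene the authors used, so the Ct values are purely illustrative:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a tight-junction gene by the Livak 2^-ddCt method.

    ct_*: qPCR cycle thresholds for the gene of interest and a reference
    gene, in treated (FS-exposed) cells and in untreated control cells.
    """
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# hypothetical example: zonulin-1 Ct drops from 25.0 to 23.5 while the
# reference gene stays at 17.0 in both conditions
print(relative_expression(23.5, 17.0, 25.0, 17.0))  # ~2.8-fold upregulation
```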

Keywords: gut microbiota, intestinal barrier, autism spectrum disorders, Pleurotus eryngii

Procedia PDF Downloads 166
655 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical fields, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture: a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account, in particular by improving real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the approaches used in existing Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in proposing dynamic maintenance for multi-component industrial systems. The term 'dynamic' refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products; 'large-scale' refers to complex multi-component industrial systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and the optimal product architecture.
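
The first optimization level (choosing per-component design levels against cost and reliability) can be sketched with a plain genetic algorithm. The component catalogue, series-reliability model, penalty weight, and GA settings below are toy assumptions, not the authors' model, which uses four decision variables per component plus a second MFOP-based maintenance level:

```python
import random

# toy catalogue: (reliability, life-cycle cost) per design level; the
# real model carries four decision variables per component
LEVELS = [(0.90, 1.0), (0.95, 1.8), (0.99, 3.5)]
N_COMPONENTS, POP, GENS = 8, 40, 200
TARGET_AVAILABILITY = 0.75

def fitness(genome):
    """Minimize cost while meeting a series-system availability target."""
    rel, cost = 1.0, 0.0
    for level in genome:
        r, c = LEVELS[level]
        rel *= r           # series reliability: product of component values
        cost += c
    penalty = 100.0 * max(0.0, TARGET_AVAILABILITY - rel)
    return cost + penalty

def evolve():
    pop = [[random.randrange(len(LEVELS)) for _ in range(N_COMPONENTS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        parents = pop[:POP // 2]              # elitist truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # mutation
                child[random.randrange(N_COMPONENTS)] = \
                    random.randrange(len(LEVELS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the hybrid scheme, each candidate evaluated here would additionally trigger the second-level GA, which scores the design by the best maintenance plan it admits.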

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 118
654 A Comparative Laboratory Evaluation of Efficacy of Two Fungi: Beauveria bassiana and Acremonium perscinum, on Dichomeris eridantis Meyrick (Lepidoptera: Gelechiidae) Larvae, an Important Pest of Dalbergia sissoo

Authors: Gunjan Srivastava, Shamila Kalia

Abstract:

Dalbergia sissoo Roxb. (family Leguminosae; subfamily Papilionoideae) is an economically and ecologically important tree species with medicinal value. Of the rich complex of insect fauna it hosts, ten species have been recognized as potential pests of nurseries and plantations. The present study was conducted to explore an effective, ecofriendly control of Dichomeris eridantis Meyrick, an important defoliator pest of D. sissoo. Health and environmental concerns demand a bio-intensive pest management strategy employing ecofriendly measures. In the present laboratory bioassay, two entomopathogenic fungi, Acremonium perscinum and Beauveria bassiana, were tested and compared, evaluating the efficacy of seven different concentrations of each (besides a control) against the 3rd, 4th, and 5th instar larvae of D. eridantis, on the basis of mean percent mortality data recorded and tabulated for seven days after treatment application. Analysis showed that the two treatments differ significantly from each other. Variations amongst instars and durations with respect to mortality were also highly significant (p < 0.001), and all their interactions varied significantly. B. bassiana at a spore concentration of 0.25 × 10⁷ spores/ml caused the maximum mean percent mortality (62.38%), followed by its 0.25 × 10⁶ spores/ml concentration (56.67%). Mean percent mortality at the maximum spore concentration of A. perscinum (0.054 × 10⁷ spores/ml) and at its next highest concentration (0.054 × 10⁶ spores/ml) was far lower (45.40% and 31.29%, respectively). At 168 hours, the mean percent mortality of larval instars due to both fungal treatments reached its maximum (52.99%), whereas at 24 hours it remained lowest (5.70%). In both cases, treatments were most effective against 3rd instar larvae and least effective against 5th instar larvae. A comparative account of the efficacy of B. bassiana and A. perscinum on the 3rd, 4th, and 5th instar larvae of D. eridantis on the 5th, 6th, and 7th post-treatment observation days, on the basis of their median lethal concentrations (LC50), proved B. bassiana to be the more potent microbial pathogen of the two fungi for all three instars on all three observation days. Percent mortality of D. eridantis increased in a dose-dependent manner. Koch's postulates tested positive, confirming the pathogenicity of B. bassiana against the larval instars of D. eridantis. LC90 values of 0.280 × 10¹¹ spores/ml, 0.301 × 10⁸ spores/ml, and 0.262 × 10⁸ spores/ml of B. bassiana were standardized, which can effectively cause mortality of all the larval instars of D. eridantis in the field after the 5th, 6th, and 7th day of application, respectively. These concentrations can therefore be safely used in nurseries as well as plantations of D. sissoo for effective control of D. eridantis larvae.
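
LC50 and LC90 values of this kind come from fitting a dose-response curve to the concentration-mortality data. The sketch below uses a log-logistic fit rather than the probit analysis typical of such bioassays, and the concentration-mortality pairs are illustrative numbers, not the study's raw data:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, lc50, slope):
    """Expected proportion mortality at spore concentration conc."""
    return 1.0 / (1.0 + (lc50 / conc) ** slope)

# hypothetical spore concentrations (spores/ml) and observed mortality
conc = np.array([2.5e4, 2.5e5, 2.5e6, 2.5e7])
mortality = np.array([0.08, 0.25, 0.57, 0.62])

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality,
                             p0=[1e6, 1.0], maxfev=10_000)
lc90 = lc50 * (0.9 / 0.1) ** (1.0 / slope)   # invert curve for 90% mortality
print(f"LC50 = {lc50:.3g} spores/ml, LC90 = {lc90:.3g} spores/ml")
```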

Keywords: Acremonium perscinum, Beauveria bassiana, Dalbergia sissoo, Dichomeris eridantis

Procedia PDF Downloads 225
653 The Effects of Exercise Training on LDL Mediated Blood Flow in Coronary Artery Disease: A Systematic Review

Authors: Aziza Barnawi

Abstract:

Background: Regular exercise reduces risk factors associated with cardiovascular diseases. Over the past decade, exercise interventions have been introduced to reduce the risk of and prevent coronary artery disease (CAD). Elevated low-density lipoproteins (LDL) contribute to the formation of atherosclerosis; its manifestations on the endothelium narrow the coronary artery and impair endothelial function. The flow-mediated dilation (FMD) technique is therefore used to assess this function. The results of previous studies have been inconsistent and difficult to interpret across different types of exercise programs. The relationship between exercise therapy and lipid levels has been extensively studied, and exercise is known to improve the lipid profile and endothelial function; however, the effectiveness of exercise in altering LDL levels and improving blood flow remains controversial. Objective: This review aims to explore the evidence and quantify the impact of exercise training on LDL levels and on vascular function as assessed by FMD. Methods: The electronic databases PubMed, Google Scholar, Web of Science, the Cochrane Library, and EBSCO were searched using the keywords: 'low and/or moderate aerobic training', 'blood flow', 'atherosclerosis', 'LDL mediated blood flow', 'Cardiac Rehabilitation', 'low-density lipoproteins', 'flow-mediated dilation', 'endothelial function', 'brachial artery flow-mediated dilation', 'oxidized low-density lipoproteins', and 'coronary artery disease'. Eligible studies lasted 6 weeks or more and examined LDL levels and/or FMD. Studies of different training intensities and of endurance training in healthy or CAD individuals were included. Results: Twenty-one randomized controlled trials (RCTs) (14 FMD and 7 LDL studies) with 776 participants (605 exercise participants and 171 control participants) met the eligibility criteria and were included in the systematic review. Endurance training resulted in a greater reduction in LDL levels and their subfractions and a better FMD response. Overall, the training groups showed improved physical fitness compared with the control groups. Participants whose exercise duration was ≥150 minutes/week had significant improvement in FMD and LDL levels compared with those exercising <150 minutes/week. Conclusion: Although the relationship between physical training, LDL levels, and blood flow in CAD is complex and multifaceted, there are promising results for the primary and secondary prevention of CAD by exercise. Exercise training, including resistance, aerobic, and interval training, is positively correlated with improved FMD. However, the small body of evidence from the LDL studies (resistance and interval training) did not show a significant association with improved blood flow. Increasing evidence suggests that exercise training is a promising adjunctive therapy for improving cardiovascular health, potentially improving blood flow and contributing to the overall management of CAD.

Keywords: exercise training, low density lipoprotein, flow mediated dilation, coronary artery disease

Procedia PDF Downloads 72
652 Phytochemical and Antimicrobial Properties of Zinc Oxide Nanocomposites on Multidrug-Resistant E. coli Enzyme: In-vitro and in-silico Studies

Authors: Callistus I. Iheme, Kenneth E. Asika, Emmanuel I. Ugwor, Chukwuka U. Ogbonna, Ugonna H. Uzoka, Nneamaka A. Chiegboka, Chinwe S. Alisi, Obinna S. Nwabueze, Amanda U. Ezirim, Judeanthony N. Ogbulie

Abstract:

Antimicrobial resistance (AMR) is a major threat to the global health sector. Zinc oxide nanocomposites (ZnONCs), composed of zinc oxide nanoparticles and phytochemicals from an Azadirachta indica aqueous leaf extract, were assessed for their physico-chemical properties and their in silico and in vitro antimicrobial activity against multidrug-resistant Escherichia coli enzymes. Gas chromatography coupled with mass spectrometry (GC-MS) analysis of the ZnONCs revealed the presence of twenty volatile phytochemical compounds, among which is scoparone. Characterization of the ZnONCs was done using ultraviolet-visible spectroscopy (UV-vis), energy dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). Dehydrogenase enzymes convert colorless 2,3,5-triphenyltetrazolium chloride to red triphenyl formazan (TPF); the rate of formazan formation in the presence of ZnONCs is proportional to the enzyme activity. The color is extracted and measured at 500 nm, and the percentage of enzyme activity is calculated. To determine the bioactive components of the ZnONCs, characterize their binding to the enzymes, and evaluate the stability of the enzyme-ligand complexes, Density Functional Theory (DFT) analysis, docking, and molecular dynamics simulations, respectively, were employed. The results showed arrays of ZnONC nanorods with maximal absorption wavelengths of 320 nm and 350 nm, thermally stable over the temperature range of 423.77 to 889.69 ℃. The in vitro study assessed the dehydrogenase-inhibitory properties of the ZnONCs, a conjugate of ZnONCs and ampicillin (ZnONCs-amp), the aqueous leaf extract of A. indica, and ampicillin (the standard drug). The findings revealed that, at a concentration of 500 μg/mL, 57.89% of the enzyme activity was inhibited by the ZnONCs, compared to 33.33% by the standard drug (ampicillin) and 21.05% by the aqueous leaf extract of A. indica. The inhibition of enzyme activity by the ZnONCs at 500 μg/mL was further enhanced to 89.74% by conjugation with ampicillin. The in silico study of the ZnONCs revealed scoparone as the most viable competitor of nicotinamide adenine dinucleotide (NAD⁺) for the coenzyme binding pocket on E. coli malate and histidinol dehydrogenase. From the findings, it can be concluded that the scoparone component of the nanocomposites, in synergy with the zinc oxide nanoparticles, inhibited E. coli malate and histidinol dehydrogenase by competitively binding to the NAD⁺ pocket, and that conjugation of the ZnONCs with ampicillin further enhanced the antimicrobial efficiency of the nanocomposite against multidrug-resistant E. coli.
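
The percentage inhibition implied by the TPF assay above is a simple ratio of absorbances. A minimal sketch follows, with illustrative absorbance values chosen to reproduce the reported 57.89% figure; the study's raw absorbances are not given in the abstract:

```python
def percent_inhibition(abs_control, abs_treated):
    """Dehydrogenase inhibition from TPF absorbance at 500 nm.

    abs_control: formazan absorbance of the uninhibited E. coli culture
    abs_treated: absorbance in the presence of ZnONCs (or the conjugate)
    """
    return 100.0 * (abs_control - abs_treated) / abs_control

# hypothetical readings consistent with the reported ~57.89% inhibition
print(round(percent_inhibition(0.95, 0.40), 2))  # 57.89
```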

Keywords: antimicrobial resistance, dehydrogenase activities, E. coli, zinc oxide nanocomposites

Procedia PDF Downloads 49
651 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium

Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas

Abstract:

Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. Contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, this work studied the operational stability, under changes in the composition of the supplied medium, of an aerobic biofilm reactor. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron, and 2,4-D, and whose members carry the genes encoding the main catabolic enzymes, atzABCD, tfdACD, and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg·L⁻¹): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for the herbicides, 78-90% for COD, and 92-96% for TOC, with 61-83% dehalogenation, were reached. In the microbial community, the genes encoding the catabolic enzymes of the different herbicides, tfdACD and puhB, and occasionally the genes atzA and atzC, were detected. After the acclimatization, the triazine herbicides were removed from the mixture formulation. Volumetric loading rates of the 2,4-D and diuron mixture were supplied continuously to the reactor (1.9-21.5 mg herbicides·L⁻¹·h⁻¹). Throughout this stage, the removal efficiencies obtained were 86-100% for the herbicide mixture, 63-94% for COD, and 90-100% for TOC, with dehalogenation values of 63-100%. The genes encoding the enzymes of the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and the genes atzA and atzC occasionally. Subsequently, the triazine herbicides atrazine and simazine were restored to the supplied medium, and different volumetric loading rates of this mixture were fed continuously to the reactor (2.9 to 12.6 mg herbicides·L⁻¹·h⁻¹). During this new treatment process, removal efficiencies of 65-95% for the herbicide mixture, 63-92% for COD, and 66-89% for TOC, with dehalogenation values of 73-94%, were observed. In this last case, the genes tfdACD, puhB, and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric acid hydrolase enzyme, could not be detected, though partial degradation of cyanuric acid was determined to occur. In general, the community in the biofilm reactor showed catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the time needed to recover degradation of the herbicides.
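
The two performance quantities tracked above follow directly from feed and effluent measurements. A minimal sketch follows; the flow rate and packed-bed volume in the example are hypothetical, while the 80.1 mg/L feed is simply the sum of the 2,4-D (59.7) and diuron (20.4) concentrations listed above:

```python
def volumetric_loading_rate(flow_l_per_h, feed_mg_per_l, reactor_vol_l):
    """Herbicide volumetric loading rate (mg herbicides L^-1 h^-1)."""
    return flow_l_per_h * feed_mg_per_l / reactor_vol_l

def removal_efficiency(feed_mg_per_l, effluent_mg_per_l):
    """Percent removal of the herbicide mixture across the reactor."""
    return 100.0 * (feed_mg_per_l - effluent_mg_per_l) / feed_mg_per_l

# example: 80.1 mg/L 2,4-D + diuron feed at a hypothetical 0.5 L/h
# into a hypothetical 4 L packed bed, with 4.0 mg/L in the effluent
print(volumetric_loading_rate(0.5, 80.1, 4.0))  # ~10 mg L^-1 h^-1
print(removal_efficiency(80.1, 4.0))            # ~95%
```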

Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides

Procedia PDF Downloads 435
650 The Impact of Encapsulated Raspberry Juice on the Surface Colour of Enriched White Chocolate

Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Aleksandar Fistes, Vesna Tumbas Saponjac, Danica Zaric

Abstract:

Chocolate is a complex rheological system, usually defined as a suspension consisting of non-fat particles dispersed in cocoa butter as a continuous fat phase. Dark chocolate possesses polyphenols as major constituents, whose dietary consumption has been associated with beneficial effects. Milk chocolate is formulated with a lower percentage of cocoa bean liquor than dark chocolate and often contains lower amounts of polyphenols, while in white chocolate the fat-free cocoa solids are left out completely. Following the current trend in the development of functional foods, there is an idea to create enriched white chocolate with the addition of encapsulated bioactive compounds from berry fruits. The aim of this study was to examine the surface colour of white chocolate enriched with 6, 8, and 10% of raspberry juice encapsulated in maltodextrins, added in order to preserve the stability, bioactivity, and bioavailability of the active ingredients. The surface colour of the samples was measured with a MINOLTA Chroma Meter CR-400 (Minolta Co., Ltd., Osaka, Japan) using D65 lighting, a 2º standard observer angle, and an 8-mm aperture in the measuring head. The following CIELab colour coordinates were determined: L* (lightness), a* (redness to greenness), and b* (yellowness to blueness). The addition of the raspberry encapsulate led to the creation of a new type of enriched chocolate. The raspberry encapsulate changed the lightness (L*), a* (red tone), and b* (yellow tone) values measured on the surface of the enriched chocolate in accordance with the applied concentrations. White chocolate had the significantly (p < 0.05) highest L* (74.6) and b* (20.31) values of all samples, indicating the bright surface of the white chocolate as well as a high share of yellow tone. At the same time, white chocolate had a negative a* value (-1.00) on its surface, indicating green tones. The raspberry juice encapsulate had the darkest surface, with the significantly (p < 0.05) lowest L* value (42.75), and increasing its concentration in the enriched chocolates decreased their L* values: the chocolate with 6% encapsulate had a significantly (p < 0.05) higher L* value (60.56) than the enriched chocolates with 8% (53.57) and 10% (51.01) encapsulate. The raspberry juice encapsulate increased the red tone of the enriched chocolates in accordance with the added amounts (a* values of 23.22, 30.85, and 33.32 in the enriched chocolates with 6, 8, and 10% encapsulated raspberry juice, respectively). The presence of yellow tones in the enriched chocolates significantly (p < 0.05) decreased with the addition of the encapsulate (itself with a b* value of 5.21), from 10.01 in the enriched chocolate with the minimal amount of raspberry juice encapsulate to 8.91 in the chocolate with the maximum concentration. The addition of encapsulated raspberry juice to white chocolate thus led to the creation of a new type of enriched chocolate with an attractive colour. The research in this paper was conducted within the project titled 'Development of innovative chocolate products fortified with bioactive compounds' (Innovation Fund Project ID 50051).
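
The reported CIELab coordinates can be condensed into a single total colour difference. The abstract itself does not compute ΔE*; the sketch below applies the standard CIE76 formula to the mean values quoted above purely as an illustration:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELab surface measurements."""
    dl, da, db = (c1 - c2 for c1, c2 in zip(lab1, lab2))
    return math.sqrt(dl * dl + da * da + db * db)

# mean L*, a*, b* reported above for white chocolate and the 10% sample
white = (74.60, -1.00, 20.31)
enriched_10 = (51.01, 33.32, 8.91)
print(round(delta_e_ab(white, enriched_10), 1))  # total colour change ~43
```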

Keywords: color, encapsulated raspberry juice, polyphenols, white chocolate

Procedia PDF Downloads 183
649 Fields of Power, Visual Culture, and the Artistic Practice of Two 'Unseen' Women of Central Brazil

Authors: Carolina Brandão Piva

Abstract:

In our visual culture, images play a newly significant role as the basis of a complex dialogue between imagination, creativity, and social practice. Insofar as imagination has broken out of the 'special expressive space of art' to become part of the quotidian mental work of ordinary people, it is pertinent to recognize that visual representation can no longer be treated as if it belonged to a domain detached from everyday life, or as exclusively 'centered' within the limited frame of 'art history.' The approach of Visual Culture as a field of study is, in this sense, indispensable for comprehending that not only 'the image' but also 'the imagined' and 'the imaginary' are produced in the plurality of social interactions; crucially, this assertion points to something new in contemporary cultural processes, namely that both imagination and image production constitute a social practice. This paper starts from this approach and examines the artistic practice of two women from the State of Goiás, Brazil, ordinary citizens with their daily activities and narratives who are also dedicated to the production of visuality. With no formal training from art schools, branded or otherwise, Maria Aparecida de Souza Pires deploys the 'waste disposal' of daily life, from car tires to old work clothes, as a trampoline for art; also adept at sourcing raw materials from her surroundings, she manipulates rough-hewn wood, tree trunks, plant life, and various other pieces she collects from nature, giving them new meaning and possibility. Hilda Freire works with sculptures in clay, using different scales and styles; her art focuses on representations of women and pays homage to unprivileged groups such as practitioners of African-Brazilian religions, blue-collar workers, poor live-in housekeepers, and so forth. Although they have never been acknowledged by any mainstream art institution in Brazil, whose 'criterion of value' still favors formally trained artists, Maria Aparecida de Souza Pires and Hilda Freire have produced visualities that instigate 'new ways of seeing' and merit cultural significance in many ways. Their artworks neither descend from a 'traditional' medium nor depend on the 'canonical viewing settings' of visual representation; rather, they produce relationships with the world that result not in 'seeing more' but in seeing 'at least differently.' From this perspective, the paper finally demonstrates that grouping this kind of artistic production under the label of 'mere craft' has much more to do with who is privileged within the fields of power in the art system, with who we see and who we do not see, and with whose imagination of what is fed by which visual images in Brazilian contemporary society.

Keywords: visual culture, artistic practice, women's art in the Brazilian State of Goiás, Maria Aparecida de Souza Pires, Hilda Freire

Procedia PDF Downloads 150
648 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, a system of nonlinear ordinary differential equations is derived to describe the flow behavior. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. The similarity transformations simplify the complex governing equations into a system of nonlinear ordinary differential equations, facilitating the analysis of the flow behavior. To obtain numerical solutions for the system, the Chebyshev collocation method is implemented. This approach provides accurate approximations to the nonlinear equations, enabling efficient computation of the heat transfer properties. The obtained results are compared with the existing literature, establishing the validity and consistency of the numerical approach. The study's major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow. The analysis reveals the impact of parameters such as magnetic field strength, disk motion amplitude, and fluid viscosity, as captured by the squeeze number (S), the suction/injection parameter (A), the Hartmann number (M), the Prandtl number (Pr), the modified Eckert number (Ec), and the dimensionless length (δ), on the heat transfer rate between the disks. These findings contribute to a comprehensive understanding of the system's behavior and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of MHD squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach. The investigation of the flow parameters highlights their influence on heat transfer, contributing to the existing knowledge in this field. The agreement of the results with the previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
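
As a minimal illustration of the numerical machinery (a model problem only; the actual coupled squeezing-flow equations are not reproduced here), the sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points and solves the linear boundary value problem u''(x) = exp(x), u(-1) = u(1) = 0, whose exact solution is known; the nonlinear flow system would be treated the same way, with a Newton iteration on top:

    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x
        (Trefethen's construction) for N+1 points on [-1, 1]."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))  # diagonal entries via negative row sums
        return D, x

    N = 16
    D, x = cheb(N)
    D2 = D @ D  # second-derivative operator

    # Impose u(-1) = u(1) = 0 by restricting the system to interior nodes.
    A = D2[1:N, 1:N]
    f = np.exp(x[1:N])
    u = np.zeros(N + 1)
    u[1:N] = np.linalg.solve(A, f)

    exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
    print("max error:", np.abs(u - exact).max())  # near machine precision

The near-machine-precision error obtained with only 17 nodes illustrates the spectral accuracy that makes collocation attractive for this class of problems.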

Keywords: squeezing flow, magneto-hydrodynamics (MHD), Chebyshev collocation method (CCM), parallel disks, finite difference method (FDM)

Procedia PDF Downloads 75
647 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History

Authors: Carmen Noheda

Abstract:

This text reflects on ways of thinking about the history of music by examining historiographical production in Spain at the turn of the century. Drawing on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music requires both knowledge of the history that is being written and investigated and familiarity with current theoretical trends and methodologies that allow the different tendencies of recent decades to be recognized and defined. With these premises as its objective, the project takes as its point of departure the 'immediate historiography' of Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen's notion of the disciplinary matrix, an important contribution to the understanding of historiography. Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing present-day forms of thinking about the history of music. Following these theories, the article will first address the characteristics and identification of current historiographical currents in Spanish musicology and then carry out an analysis based on Rüsen's theories. Finally, it will offer some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition but has also implied, in the case of Spain, an absence of methodological schools and insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable, and comprehensible within a society.

Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music

Procedia PDF Downloads 190
646 Analysis of Resistance and Virulence Genes of Gram-Positive Bacteria Detected in Calf Colostrums

Authors: C. Miranda, S. Cunha, R. Soares, M. Maia, G. Igrejas, F. Silva, P. Poeta

Abstract:

The worldwide inappropriate use of antibiotics has increased the emergence of antimicrobial-resistant microorganisms isolated from animals, humans, food, and the environment. To combat this complex and multifaceted problem, it is essential to know the prevalence of such microorganisms in livestock animals and the possible ways of transmission among animals and between animals and humans. Enterococci species, in particular E. faecalis and E. faecium, are among the most common nosocomial bacteria, causing infections in animals and humans. Thus, the aim of this study was to characterize resistance and virulence factor genes in two enterococci species isolated from calf colostrum on Portuguese dairy farms. The 55 enterococci isolates (44 E. faecalis and 11 E. faecium) were tested for the presence of resistance genes for the following antibiotics: erythromycin (ermA, ermB, and ermC), tetracycline (tetL, tetM, tetK, and tetO), quinupristin/dalfopristin (vatD and vatE), and vancomycin (vanB). Of these, 25 isolates (15 E. faecalis and 10 E. faecium) have so far been tested for 8 virulence factor genes (esp, ace, gelE, agg, cpd, cylA, cylB, and cylLL). The resistance and virulence genes were detected by PCR, using specific primers and conditions; negative and positive controls were used in all PCR assays. All enterococci isolates showed resistance to erythromycin and tetracycline through the presence of the genes ermB (n=29, 53%), ermC (n=10, 18%), tetL (n=49, 89%), tetM (n=39, 71%), and tetK (n=33, 60%). Only two (4%) E. faecalis isolates carried the tetO gene. No vancomycin resistance genes were found. The virulence genes detected in both species were cpd (n=17, 68%), agg (n=16, 64%), ace (n=15, 60%), esp (n=13, 52%), gelE (n=13, 52%), and cylLL (n=8, 32%). In general, each isolate carried at least three virulence genes. No virulence genes were found in three E. faecalis isolates, and only E. faecalis isolates carried the virulence genes cylA (n=4, 16%) and cylB (n=6, 24%). In conclusion, the colostrum samples consumed by calves contained antibiotic-resistant enterococci harboring virulence genes. This genotypic characterization is crucial for controlling antibiotic-resistant bacteria through the implementation of strict measures safeguarding public health. Acknowledgements: This work was funded by the R&D Project CAREBIO2 (Comparative assessment of antimicrobial resistance in environmental biofilms through proteomics - towards innovative theragnostic biomarkers), with reference NORTE-01-0145-FEDER-030101 and PTDC/SAU-INF/30101/2017, financed by the European Regional Development Fund (ERDF) through the Northern Regional Operational Program (NORTE 2020) and the Foundation for Science and Technology (FCT). This work was also supported by the Associate Laboratory for Green Chemistry - LAQV, which is financed by national funds from FCT/MCTES (UIDB/50006/2020 and UIDP/50006/2020).
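
The reported percentages follow directly from the gene counts. As a small illustrative check (not part of the study's methodology), the prevalences can be recomputed from the counts given above:

    # Recompute the reported prevalences from the counts in the abstract.
    resistance = {"ermB": 29, "ermC": 10, "tetL": 49, "tetM": 39,
                  "tetK": 33, "tetO": 2}  # out of 55 isolates
    virulence = {"cpd": 17, "agg": 16, "ace": 15, "esp": 13,
                 "gelE": 13, "cylLL": 8, "cylA": 4, "cylB": 6}  # out of 25

    for gene, n in resistance.items():
        print(f"{gene}: {n}/55 = {100 * n / 55:.0f}%")
    for gene, n in virulence.items():
        print(f"{gene}: {n}/25 = {100 * n / 25:.0f}%")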

Keywords: antimicrobial resistance, calf, colostrums, enterococci

Procedia PDF Downloads 197
645 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, like e.g. the so-called “Homogeneous Anisotropic Hardening” (HAH) model for the description of yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation density based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. In general, however, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation density based crystal plasticity model, including an implementation of the backstress, is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can describe anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, thanks to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
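
To make the calibration step concrete, here is a minimal sketch (purely illustrative: the numbers are invented, and a simple Voce hardening law stands in for the far richer HAH model) of how macroscopic hardening parameters could be fitted to a stress-strain curve produced by a virtual RVE experiment:

    import numpy as np
    from scipy.optimize import curve_fit

    def voce(eps, sigma0, Q, b):
        """Voce hardening law: flow stress vs. equivalent plastic strain."""
        return sigma0 + Q * (1.0 - np.exp(-b * eps))

    # Stand-in for a virtual uniaxial tension experiment on an RVE; in the
    # paper this curve would come from the crystal plasticity spectral
    # solver. The parameter values and noise level below are invented.
    eps = np.linspace(0.0, 0.2, 40)
    rng = np.random.default_rng(0)
    sigma_virtual = voce(eps, 180.0, 120.0, 15.0) + rng.normal(0.0, 1.0, eps.size)

    popt, _ = curve_fit(voce, eps, sigma_virtual, p0=(150.0, 100.0, 10.0))
    print("fitted sigma0, Q, b:", np.round(popt, 2))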

Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver

Procedia PDF Downloads 314
644 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not especially complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards a solution for the problem is the primary objective in the initial stages. Optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know better the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team matures in strength, acquires substantial knowledge, and begins to transfer it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions than on the traditional ones in order to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For an empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear: the required logic is attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ reduces, and as a result the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment the difference ideally keeps decreasing and the logic attained through the solution keeps increasing. Even if a resource investment is higher, the managers and authorities ideally make sure that the investment is being made in a proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point.
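
A minimal rendering of the ideal curve described above (illustrative only; the paper's actual metric and survey data are not reproduced here) might look like this, with logic growing ever more steeply per unit of dedicated resource:

    import numpy as np
    import matplotlib.pyplot as plt

    # Idealized curve: quantified logic on Y, cumulative dedicated
    # resources on X. A parabola through the origin reproduces the
    # described increasing slope; the scaling is a made-up assumption.
    resources = np.linspace(0.0, 10.0, 100)
    logic = resources ** 2  # hypothetical quantified-logic metric

    plt.plot(resources, logic)
    plt.xlabel("Dedicated resources (also a proxy for time)")
    plt.ylabel("Quantified logic attained")
    plt.title("Ideal logic vs. resource outflow (illustrative)")
    plt.show()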

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 108
643 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques for improving decision-making; leveraging some of the most complex techniques in data science, it is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, the specific visualizations required by geotechnical engineers may run into its limitations. This paper therefore studies the use of Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rockfall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need to use multiple software programs on a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
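
As a hedged sketch of the mechanism (the authors' dashboard itself is not reproduced here), a Power BI Python visual runs a script against a pandas DataFrame that Power BI injects under the name dataset; the mock DataFrame and its column names below are invented so the script can also run standalone:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Inside a Power BI Python visual, Power BI supplies the selected
    # fields as a pandas DataFrame named `dataset`. We mock it here
    # (hypothetical column names) so the script is testable outside
    # Power BI as well.
    try:
        dataset  # noqa: F821  (defined by Power BI at runtime)
    except NameError:
        dataset = pd.DataFrame({
            "depth_m": [5, 10, 15, 20, 25],
            "rqd_pct": [90, 75, 60, 80, 95],  # rock quality designation
            "borehole": ["BH01"] * 5,
        })

    # Simple downhole plot: RQD versus depth, depth increasing downwards.
    fig, ax = plt.subplots()
    ax.plot(dataset["rqd_pct"], dataset["depth_m"], marker="o")
    ax.invert_yaxis()
    ax.set_xlabel("RQD (%)")
    ax.set_ylabel("Depth (m)")
    ax.set_title(f"Borehole log sketch: {dataset['borehole'].iloc[0]}")
    plt.show()

More specialised plots, such as the stereonets mentioned above, would follow the same pattern with a dedicated plotting library in place of the basic matplotlib calls.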

Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 92
642 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and patients with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical and distinct treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality as found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper, autism, fundamentally a communicative disorder, is shown to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. The schizoid personality, in turn, is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that would identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists are shown to form social attachments, but without the capacity to interact with mutual concern. Empathy is shown to be teachable: as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids is shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 114
641 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards; the assessment of the level of slope stability hazard; the development of a sophisticated and practical hazard analysis method; the linkage of the failure type of specific landslide conditions to the appropriate solution; and the application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
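
As a concrete (and deliberately simple) instance of the limit equilibrium idea discussed above, the textbook infinite-slope model computes the factor of safety as the ratio of resisting to driving shear forces on a planar slip surface; the soil parameters below are invented for illustration and are not taken from the study:

    import math

    def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
        """Factor of safety of an infinite slope (Mohr-Coulomb strength).

        c        effective cohesion (kPa)
        phi_deg  effective friction angle (degrees)
        gamma    unit weight of soil (kN/m^3)
        z        depth of the slip surface (m)
        beta_deg slope angle (degrees)
        u        pore water pressure on the slip surface (kPa)
        """
        beta = math.radians(beta_deg)
        phi = math.radians(phi_deg)
        resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
        driving = gamma * z * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # Hypothetical slope: FS > 1.00 indicates stability in this model.
    print(round(infinite_slope_fs(c=10.0, phi_deg=30.0, gamma=19.0,
                                  z=3.0, beta_deg=25.0), 2))

Exactly the failure modes listed above (progressive failure, liquefaction, internal deformation, creep) are what such a closed-form ratio cannot capture, which motivates the advanced constitutive modeling the study calls for.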

Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard

Procedia PDF Downloads 99
640 Organ Donation after Medical Aid in Dying: A Critical Study of Clinical Processes and Legal Rules in Place

Authors: Louise Bernier

Abstract:

Under some jurisdictions (including Canada), eligible patients can request and receive medical assistance in dying (MAiD) through lethal injections, inducing their cardiocirculatory death. Those same patients may also wish to donate their organs in the process. If they qualify as organ donors, a clinical and ethical rule called the 'dead donor rule' (DDR) requires transplant teams to wait until cardiocirculatory death is confirmed, followed by a 'no touch' period (5 minutes in Canada), before they can proceed with organ removal. The medical procedures (lethal injections) as well as the delays associated with the DDR can damage organs (mostly thoracic organs) due to prolonged anoxia. Yet strong scientific evidence demonstrates that operating differently and reconsidering the DDR would result in more organs of better quality being available for transplant. This idea generates discomfort and resistance, but it is also worth considering, especially in a context of chronic shortage of available organs. One option that could be examined for MAiD patients who wish to be, and can be, organ donors would be to remove vital organs while the patients are still alive (and under sedation). This would imply accepting that the patient's death occurs through organ donation instead of through the lethal injections required under MAiD's legal rules. It would also mean that patients requesting MAiD and wishing to be organ donors could aspire to donate better quality organs, including their heart, an altruistic gesture that carries important symbolic value for many donors and their families. Following a patient-centered approach, our hypothesis is that preventing vital organ donation from a living donor in all circumstances is neither perfectly coherent with how legal mentalities have evolved lately in the field of fundamental rights nor compatible with the clinical and ethical frameworks that shape the landscape in which those complex medical decisions unfold. Through a study of the legal, ethical, and clinical rules in place, both at the national and international levels, this analysis raises questions about the numerous inconsistencies associated with respecting the DDR for patients who have chosen to die through MAiD. We will begin with an assessment of the erosion of certain national legal frameworks pertaining to the sacred nature of the right to life, which now also includes the right to choose how one wishes to die. We will then study recent innovative clinical protocols tested in different countries to help address acute organ shortage problems in creative ways. We will conclude this analysis with an ethical assessment of the situation, referring to principles such as justice, autonomy, altruism, beneficence, and non-malfeasance. This study will build a strong argument in favor of starting to allow vital organ donations from living donors in countries where MAiD is already permitted.

Keywords: altruism, autonomy, dead donor rule, medical assistance in dying, non-malfeasance, organ donation

Procedia PDF Downloads 178
639 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between 1840 and 1915, approximately. This study offers a partial cartographic look at this process, suggesting that the emergence of a cultural space like Compostela as an object of literary representation paralleled the first stages of its development as a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often the novels constitute ways of presenting a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways in which the local is constructed and rendered communicable in other contexts. In this regard, we should also acknowledge that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student, or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni's team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective, using elements found in the texts of our corpus (novels and tourist guides). Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (the sketching of itineraries, the selection of zones, and indexicalization), such as a foreigner's visit guided by someone who knows the city, or the description of one's first entrance into the city. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín's 1915 novel La casa de la Troya) and the first attempt to create package tours with Galicia as a destination, a joint venture of Galician and British business owners in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly as far removed from each other as novels and tourist guides becomes obvious, and suggests the need to go deeper into the complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.
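
To give a flavor of the cartographic workflow (the study itself used ArcGIS and Piatti/Hurni-style databases; this open-source sketch and its place records are illustrative assumptions only), a small corpus of geo-referenced place mentions can be mapped with geopandas:

    import geopandas as gpd
    import matplotlib.pyplot as plt
    from shapely.geometry import Point

    # Hypothetical database rows: places mentioned in the corpus, tagged
    # by source type, loosely mimicking a literary-atlas data model.
    records = [
        {"place": "Cathedral", "source": "novel", "lon": -8.5449, "lat": 42.8806},
        {"place": "Rua do Vilar", "source": "guidebook", "lon": -8.5442, "lat": 42.8789},
        {"place": "Alameda", "source": "novel", "lon": -8.5485, "lat": 42.8768},
    ]
    gdf = gpd.GeoDataFrame(
        records,
        geometry=[Point(r["lon"], r["lat"]) for r in records],
        crs="EPSG:4326",  # WGS84 longitude/latitude
    )

    # One colour per source type: novels vs. tourist guides.
    fig, ax = plt.subplots()
    for source, group in gdf.groupby("source"):
        group.plot(ax=ax, label=source)
    ax.legend()
    ax.set_title("Sketch: corpus place mentions in Compostela (illustrative)")
    plt.show()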

Keywords: Compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 392
638 Structure and Dimensions of Teacher Professional Identity

Authors: Vilma Zydziunaite, Gitana Balezentiene

Abstract:

Teaching is one of the most responsible professions, and it is not only the job of an artisan. This profession requires a developed ability to identify oneself with the chosen teaching profession. Research questions: How do teachers characterize their authentic individual professional identity? What factors do teachers single out as supporting or limiting professional identity? The aim was to develop a grounded theory (GT) of teachers' professional identity (TPI). The research methodology is based on Charmaz's version of GT. Data were collected via semi-structured interviews with a sample of 12 teachers. Findings: 15 extracted categories revealed that the core of TPI is the teacher's professional calling. The premises of TPI are family support, the motives for choosing the teaching profession, and the teacher's didactic competence. The context of TPI consists of the teacher's compliance with the profession, purposeful preparation for pedagogical studies, and professional growth. The strategy of TPI is based on strengthening the teacher's relationship with the school community. Professional frustration limits TPI. The TPI outcome includes teacher recognition and authority, professional mastership, professionalism, and professional satisfaction. The dimensions of the TPI GT are the past (reaching the teaching profession), the present (the teacher's commitment to professional activity), and the future (reconsidering the teaching profession). Conclusions: The substantive GT describes professional identity as a complex, changing, and life-long process, which develops together with the teacher's personal identity and is connected to professional activity. The professional decision 'to be a teacher' is determined by the interaction of internal (professional vocation, personal characteristics, values, self-image, talents, abilities) and external (family, friends, school community, labor market, working conditions) factors. The dimensions of TPI development include the past (the pursuit of the teaching profession), the present (the teacher's commitment to professional activity), and the future (the revision of the teaching profession). A significant connection emerged: as the teacher's professional commitment strengthens (creating a self-image, growing professional experience, recognition, professionalism, mastery, satisfaction with pedagogical activity), the dimension of rethinking the teacher's profession weakens. This proves that professional identity occupies an important place in a teacher's life and affects professional success and job satisfaction. Teachers singled out the main factors supporting a teacher's professional identity: perception of their own self-image, professional vocation, positive personal qualities, internal motivation, teacher recognition, confidence in the choice of the teaching profession, job satisfaction, professional knowledge, professional growth, good relations with the school community, pleasant experiences, a quality education process, and excellent student achievements.

Keywords: grounded theory, teacher professional identity, semi-structured interview, school, students, school community, family

Procedia PDF Downloads 74