Search results for: fast generalized multi-directional Radon transform
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3962

242 Play, Practice and Perform: The Pathway to Becoming and Belonging as an Engineer

Authors: Rick Evans

Abstract:

Despite over 40 years of research into why women choose not to enroll in or leave undergraduate engineering programs, along with the subsequent and serious efforts to attract more women, the proportion of women receiving bachelor's degrees in engineering in the US has remained disappointingly low. We know that despite their struggles to become more welcoming and inclusive, engineering programs remain gendered, raced and classed. However, our research team has found that women who participate and indeed thrive in undergraduate engineering project teams do so in numbers that far exceed their participation in undergraduate programs. We believe part of the answer lies in the ways that project teams facilitate experiential learning, specifically providing opportunities for members to play, practice and perform. We employ a multi-case study method and assume a feminist, activist and interpretive perspective. We seek to generate concrete and context-dependent knowledge in order to explore potentially new variables and hypotheses. Our focus is to learn from those select women who are thriving. For this oral or e-poster presentation, we will focus on the results of the second of our semi-structured interviews – the learning journey interview. During this interview, we ask participants to tell us the story/ies of their participation in project teams. Our results suggest these women find joy in their experience of developing and applying engineering expertise. They experience this joy and develop their expertise in the highly patterned progression of play, practice and performance. Play is a purposeful activity in which someone enters an imaginary world, a world not yet real to them. However, this imaginary world is still very much connected to the real world, in this case, a particular kind of engineering, in that the ways of engaging are already established, codified and rule-governed. As such, these women are novices motivated to join a community of actors. Practice, better understood as practices, a count noun, is an embodied, materially interconnected collection of actions organized around the shared understandings of that community of actors. Those shared understandings reveal a social order – a particular field of engineering. No longer novices, these women begin to develop and display their emergent identities as engineers. Performance is activity meant to demonstrate competence and/or to enable, even teach, play and practice to others. As performers, these women participants become models for others. They direct play and practice, contextualizing both within a field of engineering and the specific aims of the project team community. By playing, practicing and performing engineering, women claim their identities as engineers and, equally important, have those identities acknowledged by team members. If we hope to transform our gendered, raced, classed institutions, we need to learn more about women who thrive within those institutions. We need to learn more about their processes of becoming and belonging as engineers. Our research presentation begins with a description of project teams and our multi-case study method. We then offer detailed descriptions of play, practice, and performance using the voices of women in project teams.

Keywords: engineering education, gender, identity, project teams

Procedia PDF Downloads 97
241 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)

Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida

Abstract:

Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone, with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy, together with seismic attribute analysis in Python programming and unsupervised machine learning, at least six second-order sequences were determined, indicating three main stages of tectono-stratigraphic evolution: pre-salt syn-rift, post-salt rift-climax and post-rift stages. The pre-salt syn-rift stage, with the KTS1 tectonosequence (Barremian-Aptian), reveals transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with salt tectonics and the Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced regression, mid transgressive and late normal regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences: KTS4, KTS5 and KTS6-7. The first one developed some turbiditic lobe complexes, considered as mass transport complexes, and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine channel complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin. The reservoir distribution in the Kribi-Campo sub-basin reveals some channel, fan-lobe and stacked-channel reservoirs reaching up to the polygonal fault systems.
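
The abstract does not specify which unsupervised algorithm was applied to the seismic attributes, so the sketch below is only an illustration of the general idea: clustering attribute samples into seismic-facies classes with Python. The attribute layout, the choice of K-means, and the number of clusters are assumptions, not the authors' workflow.

```python
# Illustrative sketch: unsupervised clustering of seismic attribute samples into facies classes.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_seismic_facies(attributes: np.ndarray, n_facies: int = 6) -> np.ndarray:
    """attributes: array (n_samples, n_attributes), e.g. amplitude envelope,
    instantaneous frequency, coherence. Returns one facies label per sample."""
    scaled = StandardScaler().fit_transform(attributes)        # put attributes on a common scale
    return KMeans(n_clusters=n_facies, n_init=10, random_state=0).fit_predict(scaled)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_attributes = rng.normal(size=(10_000, 4))             # synthetic stand-in for 2D seismic attributes
    facies = cluster_seismic_facies(fake_attributes)
    print(np.bincount(facies))                                  # sample count per facies class
```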

Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences

Procedia PDF Downloads 18
240 Distribution and Diversity of Pyrenocarpous Lichens in India with Special Reference to Forest Health

Authors: Gaurav Kumar Mishra, Sanjeeva Nayaka, Dalip Kumar Upreti

Abstract:

Nature exhibits a number of unique plants which can be used as indicators of the environmental condition of a particular place. Lichens are unique plants which have the ability to absorb not only organic, inorganic and metallic substances but also radioactive nuclides present in the environment. In the present study, pyrenocarpous lichens will be used as indicators of good forest health in a particular place. Pyrenocarpous lichens are simple, crust-forming lichens with black dot-like perithecia and have few characters for their taxonomical segregation compared to their foliose and fruticose brethren. The thallus colour and nature, and the presence or absence of a hypothallus, are the only few thallus characters used to segregate the pyrenocarpous taxa. The fruiting bodies of pyrenolichens, i.e. ascocarps, are perithecia. The perithecia and the contents found within them possess many important criteria for the segregation of pyrenocarpous lichen taxa. Ascocarp morphology, ascocarp arrangement, the perithecial wall, ascocarp shape and colour, ostiole shape and position, ostiole colour, and ascocarp anatomy, including the type of paraphyses, asci shape and size, ascospore septation, ascospore wall and periphyses, are the valuable characters used for segregation of different pyrenocarpous lichen taxa. India is represented by the occurrence of 350 species belonging to 44 genera and eleven families. Among the different genera, Pyrenula is dominant with 82 species, followed by Porina with 70 species. Recently, the systematics of the pyrenocarpous lichens has been revised by American and European lichenologists using phylogenetic methods. Still, the taxonomy of pyrenocarpous lichens is in flux, and the information generated after the completion of this study will play a vital role in settling the taxonomy of this peculiar group of lichens worldwide. The Indian Himalayan region exhibits a rich diversity of pyrenocarpous lichens in India. The western Himalayan region has a luxuriance of pyrenocarpous lichens due to its unique topography and climatic conditions, while the eastern Himalayan region has a rich diversity of pyrenocarpous lichens due to its warmer and moister climatic conditions. The moist and warm climate in the eastern Himalayan region supports forest with a dominance of evergreen tree vegetation. Pyrenocarpous lichen communities are good indicators of young and regenerated forest types. The rich diversity of lichens clearly indicates that most of the forests within the eastern Himalayan region are in good health. The fast pace of urbanization and other developmental activities will definitely have adverse effects on the diversity and distribution of pyrenocarpous lichens in different forest types, and the present distribution pattern will act as baseline data for carrying out future biomonitoring studies in the area.

Keywords: lichen diversity, indicator species, environmental factors, pyrenocarpous

Procedia PDF Downloads 121
239 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with the respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
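
The paper does not disclose its exact pipeline, so the following is only a minimal sketch of the multi-label idea described above: mapping free-text test-step specifications to the test automation components that implement them, and scoring the mapping with subset accuracy. The toy data, TF-IDF features and logistic regression classifier are illustrative assumptions.

```python
# Illustrative sketch: multi-label prediction of automation components from test-step text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score      # equals Subset Accuracy for multi-label targets
from sklearn.pipeline import make_pipeline

# Historical data: each test step is labelled with the component(s) that implement it.
steps = [
    "switch ignition on and wait for bus wakeup",
    "send diagnostic request and check response",
    "set vehicle speed to 50 kph on test bench",
    "verify error memory is empty after restart",
]
components = [{"PowerControl"}, {"DiagClient", "ResponseChecker"},
              {"BenchControl"}, {"DiagClient", "ResponseChecker"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)               # binary indicator matrix (steps x components)

model = make_pipeline(
    TfidfVectorizer(),                          # turn specification text into features
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(steps, Y)

Y_pred = model.predict(steps)                   # in practice: predict unseen specifications
print("Subset accuracy:", accuracy_score(Y, Y_pred))   # 1.0 only if every label set matches exactly
print(mlb.inverse_transform(model.predict(["check error memory via diagnostic request"])))
```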

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 93
238 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis

Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger

Abstract:

In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions; in fact, it is one of the main contributors of greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g. climate change, global warming) worldwide. Achieving a modal shift to foster environmentally friendly modes of transport such as rail and inland waterways is an important strategy to fulfill the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. It will be defined which properties describe high-performance transport modes, which types of technology are included and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived based on desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement to current transport networks. The idea of hyperloops is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology is the use of drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes. Drones can, therefore, be considered high-performance transport modes as well. The Trans-European Transport Networks program (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above supports the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal potential future effects of the integration of high-performance modes into transport networks. Building on that, a recommendation can be given on the following (research) steps necessary to ensure the most efficient implementation and integration processes.

Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis

Procedia PDF Downloads 302
237 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries

Authors: M. J. Eichner, Y. S. Sarhan

Abstract:

The overall purpose of this study has been the evaluation of ‘Vertical Villages’ as a new sustainable building typology, significantly reducing the negative impacts of rapid urbanization processes in third world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure, do not accommodate rapid population growth, exposing people to noise- and emission-polluted living environments with low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt’s capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecologic urbanization models, with severe effects not only for inhabitants but also for the local environment and global climate. The authors prove that ‘Vertical Village’ buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. By inserting them within high-density urban fabrics, the ecologic and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo’s «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen according to benchmarks for sustainable residential buildings, identifying major problems and deficits. In 3 case study design projects, the sustainable transformation potential through ‘Vertical Village’ buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility. The main result of the paper is that the disadvantages of density and overpopulation in developing countries can be converted with ‘Vertical Village’ buildings into advantages, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to sustainable principles of low-rise housing, can serve as an alternative to convert «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments.

Keywords: sustainable, architecture, urbanization, urban transformation, vertical village

Procedia PDF Downloads 84
236 Sustainability of the Built Environment of Ranchi District

Authors: Vaidehi Raipat

Abstract:

A city is an expression of coexistence between its users and built environment. The way in which its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently to its people, culture, environment, visual image, history, visions and identity. The quality of the built environment determines the quality of our lifestyles, but the poor ability of the built environment to adapt and sustain itself through change leads to the degradation of cities. Ranchi became the capital of the newly formed state of Jharkhand, located on the eastern side of India, in November 2000. Before this, Ranchi was known as the summer capital of Bihar and was a little larger than a town in terms of development. But since then it has been vigorously expanding in size, infrastructure as well as population. This sudden expansion has created stress on the existing built environment. The large forest cover, agricultural land, diverse culture and pleasant climatic conditions have degraded and decreased to a large extent. Narrow roads and old buildings are unable to bear the load of the changing requirements, fast-improving technology and growing population. The built environment has hence been rendered unsustainable and unadaptable through the fast changes of the present era. Some of the common hazards that can be easily spotted in the built environment are half-finished built forms; pedestrians and vehicles moving on the same part of the road; unpaved areas on street edges; over-sized, bright and randomly placed hoardings; and negligible trees or green spaces. The old buildings have been poorly maintained and the new ones are being constructed over them. Roads are too narrow to cater to the increasing traffic, both pedestrian and vehicular. The streets have a large variety of activities taking place on them, but haphazardly. Trees are being cut down for road widening and new constructions. There is no space for greenery in the commercial as well as old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations. A pseudo-understanding of functionality as well as aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the existing built environment of the city and create or regenerate the existing built environment into a more sustainable and adaptable one. For this purpose, research titled “Sustainability of the Built Environment of Ranchi District” has been carried out. In this research, the condition of the built environment of Ranchi is explored so as to figure out the problems and shortcomings existing in the city and provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, which includes its outdoor spaces like streets, parks and other open areas, its built forms as well as its users, has been analyzed in terms of various urban design parameters. Based on this analysis, strategies have been suggested to make the city environmentally, socially, culturally and economically sustainable.

Keywords: adaptable, built-environment, sustainability, urban

Procedia PDF Downloads 210
235 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by the use of technology, that pay attention to model creation rather than only executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision-making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Model and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should imply a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators. These are model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for Civil Engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the individual and team reports of the students. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). With the results obtained through the MEA, four obstacles have been identified to understanding and applying the binomial distribution: the first one was the resistance of the students to move from the linear to the probabilistic model; the second one, the difficulty of visualizing (inferring) the behavior of the population through the sample data; the third one, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and the fourth one, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that these do not change after an intervention, allows for the modification of these interventions and of the MEA, in such a way that the students may themselves identify erroneous solutions when carrying out the MEA. The MEA also showed itself to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the Civil Engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
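
The students used RStudio for the computer tasks mentioned above; as an illustration only, the sketch below reproduces the two tasks (evaluating a binomial distribution and exploring different sample sizes) in Python with scipy. The defect rate, decision threshold and sample sizes are made-up values, not the activity's actual data.

```python
# Illustrative sketch: binomial probabilities for a sample-based decision, at several sample sizes.
import numpy as np
from scipy.stats import binom

p_defective = 0.10                                 # assumed population proportion
for n in (10, 30, 100):                            # explore different sample sizes
    k = np.arange(n + 1)
    pmf = binom.pmf(k, n, p_defective)             # the distribution the students plotted
    threshold = int(np.ceil(0.15 * n))             # decision rule: more than 15% defectives observed
    tail = binom.sf(threshold - 1, n, p_defective) # P(X >= threshold)
    print(f"n={n:>3}  mode={k[np.argmax(pmf)]:>2}  P(X >= {threshold}) = {tail:.3f}")
```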

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 87
234 L1 Poetry and Moral Tales as a Factor Affecting L2 Acquisition in EFL Settings

Authors: Arif Ahmed Mohammed Al-Ahdal

Abstract:

Poetry, tales, and fables have always been a part of the L1 repertoire and one that takes the learners to another amazing and fascinating world of imagination. The storytelling class and the genre of poems are activities greatly enjoyed by all age groups. The very significant idea behind their inclusion in the language curriculum is to sensitize young minds to a wide range of human emotions that are believed to greatly contribute to building their social resilience, emotional stability, empathy towards fellow creatures, and literacy. Quite certainly, the learning objective at this stage is not language acquisition (though it happens as an automatic process) but getting the young learners to be acquainted with an entire spectrum of what may be called the ‘noble’ abilities of the human race. They enrich their very existence, inspiring them to unearth ‘selves’ that help them as adults and enable them to co-exist fruitfully and symbiotically with their fellow human beings. By extension, ‘higher’ training in these literature genres shows the universality of human emotions, sufferings, aspirations, and hopes. The current study is anchored on the Reader-Response Theory in literature learning, which suggests that the reader reconstructs the work and re-enacts the author's creative role. To reiterate, literary works provide clues or verbal symbols in a linguistic system, widely accepted by everyone who shares the language, but everyone reads their own life experiences and situations into them. The significance of words depends on the reader, even if they have a typical relationship. In every reading, there is an interaction between the reader and the text. The process of reading is an experience in which the reader tries to comprehend the literary work, which surpasses its full potential since it provides emotional and intellectual reactions that are not anticipated from the document but cannot be affirmed just by the reader as a part of the text. The idea is that the text forms the basis of a unifying experience. A reinterpretation of the literary text may transform it into a guiding principle to respond to actual experiences and personal memories. The impulses delivered to the reader vary according to the poetry or texts; nevertheless, the readers differ considerably even with the same material. Previous studies confirm that poetry is a useful tool for learning a language. The present paper works on these hypotheses and proposes to study the impetus given to L2 learning as a factor of exposure to poetry and meaningful stories in L1. The driving force behind the choice of this topic is the first-hand experience that the researcher had while teaching a literary text to a group of BA students who, as a reaction to the text, initially burst into tears and ultimately turned the class into an interactive session. The study also intends to compare the performance of male and female students post-intervention using pre- and post-tests, apart from undertaking a detailed inquiry via interviews with college learners of English to understand how L1 literature plays a great role in the acquisition of L2.

Keywords: SLA, literary text, poetry, tales, affective factors

Procedia PDF Downloads 51
233 Characterization of Potato Starch/Guar Gum Composite Film Modified by Ecofriendly Cross-Linkers

Authors: Sujosh Nandi, Proshanta Guha

Abstract:

Synthetic plastics are preferred for food packaging due to their high strength, stretchability, good water vapor and gas barrier properties, transparency and low cost. However, the environmental pollution generated by these synthetic plastics is a major concern of modern human civilization. Therefore, the use of biodegradable polymers as a substitute for synthetic non-biodegradable polymers is encouraged, even after considering the drawbacks related to the mechanical and barrier properties of the films. Starch, considered one of the potential raw materials for biodegradable polymers, suffers from poor water barrier and mechanical properties due to its hydrophilic nature. Apart from that, recrystallization of starch molecules occurs during aging, which decreases the flexibility and increases the elastic modulus of the film. The recrystallization process can be minimized by blending other hydrocolloids with similar structural compatibility into the starch matrix. Therefore, incorporation of guar gum, which has a similar structural backbone, into the starch matrix can introduce a potential film into the realm of biodegradable polymers. However, due to the hydrophilic nature of both starch and guar gum, the water barrier property of the film is low. One of the prospective solutions to enhance this could be modification of the potato starch/guar gum (PSGG) composite film using a cross-linker. Over the years, several cross-linking agents such as phosphorus oxychloride, sodium trimetaphosphate, etc. have been used to improve the water vapor permeability (WVP) of films. However, these chemical cross-linking agents are toxic, expensive and take a longer time to degrade. Therefore, naturally available carboxylic acids (tartaric acid, malonic acid, succinic acid, etc.) have been used as cross-linkers, and the water barrier property was found to be enhanced substantially. To our knowledge, no works have been reported with tartaric acid and succinic acid as cross-linking agents blended with PSGG films. Therefore, the objective of the present study was to examine the changes in the water vapor barrier property and mechanical properties of PSGG films after cross-linking with tartaric acid (TA) and succinic acid (SA). The cross-linkers were blended with the PSGG film-forming solution at four different concentrations (4, 8, 12 and 16%) and cast on a teflon plate at 37°C for 20 h. In the Fourier-transform infrared spectroscopy (FTIR) study of the developed films, a band at 1720 cm⁻¹ was observed, which is attributed to the formation of ester groups in the developed films. On the other hand, it was observed that the tensile strength (TS) of the cross-linked films decreased compared to non-cross-linked films, whereas the strain at break increased by several folds. Moreover, the results showed that the tensile strength diminished with increasing concentration of TA or SA, and the lowest TS (1.62 MPa) was observed for 16% SA. The maximum strain at break was also observed for TA at 16%, and the reason behind this could be the lower degree of crystallinity of the TA cross-linked films compared to SA. The water vapor permeability of the succinic acid cross-linked film was reduced significantly, whereas it was enhanced significantly by the addition of tartaric acid.

Keywords: cross linking agent, guar gum, organic acids, potato starch

Procedia PDF Downloads 89
232 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, leading to high stresses in the component, the aim is to ensure an even flow of air. In the case of analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the course of temperature at a given point. Thermal imaging tests can be helpful in this case. With appropriate camera settings and taking into account environmental conditions, we are able to obtain accurate temperature fields in the form of thermograms. Emission of heat from the engine to the engine compartment is an important issue when designing a cooling system. Also, in the case of liquid cooling, the main sources of heat in the form of emissions from the engine block, cylinders, etc. should be identified. This is important when redesigning the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of the variable operating range but mainly because of the different cooling conditions related to changes in speed or altitude of flight. Engine temperature also has a direct and significant impact on the properties of engine oil, in particular its viscosity, which changes under the influence of this parameter; a value that is too low or too high can result in fast wear of engine parts. One of the ways to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. In order to perform the heat emission tests of the tested engine, the ThermaCAM S65 thermovision monitoring system from FLIR (Forward-Looking Infrared) together with the ThermaCAM Researcher Professional software was used. The measurements were carried out after the engine warm-up. The engine speed was 5300 rpm. The measurements were taken for the following environmental parameters: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 96
231 Implications of Social Rights Adjudication on the Separation of Powers Doctrine: Colombian Case

Authors: Mariam Begadze

Abstract:

Separation of Powers (SOP) has often been the most frequently posed objection against the judicial enforcement of socio-economic rights. Although a lot has been written to refute those objections, it has very rarely been assessed what effect the current practice of social rights adjudication has had on the construction of the SOP doctrine in specific jurisdictions. Colombia is an appropriate case study on this question. The notion of collaborative SOP in the 1991 Constitution has affected the court’s conception of its role. On the other hand, the trends in the jurisprudence have further shaped the collaborative notion of SOP. Other institutional characteristics of Colombian constitutional law have played their share as well. Tutela action, a particularly flexible and fast judicial action for individuals, has placed the judiciary in a more confrontational relation vis-à-vis the political branches. Later interventions through abstract review of austerity measures further contributed to that development. Logically, the court’s activism in this sphere has attracted attacks from the political branches, which have turned out to be unsuccessful precisely due to the court’s outreach to the middle class, whose direct reliance on the court has turned into its direct democratic legitimacy. Only later have the structural judgments attempted to revive the collaborative notion behind the SOP doctrine. However, the court-supervised monitoring process of implementation has itself manifested fluctuations in the mode of collaboration, moving into more managerial supervision recently. This is not surprising considering the highly dysfunctional political system in Colombia, where distrust seems to be the default starting point in the interaction of the branches. The paper aims to answer the question of what the appropriate judicial tools are to realize the collaborative notion of SOP in a context where the court has to strike a balance between the strong executive and the weak and largely dysfunctional legislative branch. If the recurrent abuse lies in the indifference and inaction of the legislative branch to engage with political issues seriously, what are the tools in the court’s hands to activate the political process? The answer to this question partly lies in the court’s other strand of jurisprudence, in which it combines substantive objections with procedural ones concerning the operation of the legislative branch. The primary example is the decision on value-added tax on basic goods, in which the court invalidated the law based on the absence of sufficient deliberation in Congress on the question of the bill’s implications for the equity and progressiveness of the entire taxing system. The decision led to Congressional rejection of an identical bill based on the arguments put forward by the court. The case is perhaps the best illustration of the collaborative notion of SOP, in which the court refrains from categorical pronouncements while doing its bit to activate the political process. This also legitimizes the court’s activism based on its role to counter the most perilous abuse in the Colombian context – the failure of the political system to seriously engage with serious political questions.

Keywords: Colombian constitutional court, judicial review, separation of powers, social rights

Procedia PDF Downloads 80
230 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating and coordinating sub-processes, they make it possible to achieve superior goals. Major components include working memory, inhibition and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group took part in special games that train executive functions (i.e., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen Flanker Task was employed in order to analyze the participants’ inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always placed in a central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than the children from the control group. Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). Likewise, the comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1 %, post: 95.9 %, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8 %, post: 92.9 %). Conversely, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9 %, post: 83.5 %), while the control group revealed a significant difference (pre: 68.9 %, post: 80.3 %, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group’s low results in the pre-test. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
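
As a minimal illustration of how the two Flanker parameters reported above (response time and accuracy) can be summarised per condition from trial-level records, the sketch below uses pandas with toy data; the column names and trial values are assumptions, not the study's data.

```python
# Illustrative sketch: per-condition mean response time and accuracy from trial-level records.
import pandas as pd

trials = pd.DataFrame({
    "condition": ["congruent", "congruent", "incongruent", "incongruent"],
    "rt_s":      [1.12, 1.35, 1.41, 1.22],     # response time in seconds
    "correct":   [1, 1, 0, 1],                 # 1 = correct response
})

summary = trials.groupby("condition").agg(
    mean_rt_s=("rt_s", "mean"),
    accuracy_pct=("correct", lambda c: 100 * c.mean()),
)
print(summary)
```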

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 229
229 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System

Authors: Asegid Belay Kebede, Getachew Biru Worku

Abstract:

The Earth and its inhabitants have faced an existential threat as a result of severe manmade actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, temperature rises, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous GHG. The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few worldwide sectors where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which might be achieved by incorporating alternative and/or renewable energy sources. As a result of this rapidly changing global energy situation and rapidly dwindling fossil fuel supplies, we were driven to analyze the possibility of renewable energy sources for traction applications. Even a small achievement in energy conservation or harnessing might significantly influence the total railway system and have the potential to transform the railway sector like never before. As a result, the paper begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. For this purpose, a method based on a Google Earth system (using Helioscopes software) is developed to assess the PV potential along rail lines and on train station roofs. The Addis Ababa light rail transit system (AA-LRTS) is utilized as an example. The case study examines the electricity-generating potential and economic performance of photovoltaics installed on the AA-LRTS. As a consequence, the total capacity of solar systems at all stations, including train rooftops, reaches 72.6 MWh per day, with an annual power output of 10.6 GWh. Over a 25-year lifespan, the total CO2 emission reduction and total profit from PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment. All PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
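
As a minimal sketch of the simple techno-economic indicators quoted above (payback period and return on investment), the snippet below uses placeholder inputs; the capital cost and tariff are illustrative assumptions, and only the annual energy figure is loosely based on the 10.6 GWh reported in the abstract.

```python
# Illustrative sketch: simple payback period and lifetime ROI for a PV installation.
def payback_and_roi(capital_cost: float, annual_energy_kwh: float,
                    tariff_per_kwh: float, lifetime_years: int = 25):
    annual_revenue = annual_energy_kwh * tariff_per_kwh
    payback_years = capital_cost / annual_revenue
    lifetime_profit = annual_revenue * lifetime_years - capital_cost
    roi_pct = 100 * lifetime_profit / capital_cost
    return payback_years, roi_pct

# Placeholder numbers: capital cost and tariff are assumptions, not the study's inputs.
pb, roi = payback_and_roi(capital_cost=350e6, annual_energy_kwh=10.6e6, tariff_per_kwh=3.0)
print(f"payback ~ {pb:.1f} years, ROI over lifetime ~ {roi:.0f}%")
```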

Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit

Procedia PDF Downloads 56
228 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode

Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel

Abstract:

In recent years, with the decreasing supply of fossil fuel, renewable energy has received significant demand. Biodiesel, which is well known as vegetable oil-based fatty acid methyl ester, is an alternative fuel to diesel. It can be produced from the transesterification of vegetable oils, such as palm oil, sunflower oil, rapeseed oil, etc., with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to 10 wt% of the total biodiesel production. To date, due to the fast growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly and resulted in a significant price drop for glycerol. Therefore, extensive research has been developed to use glycerol as feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, acrolein, etc. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Currently, the approach is mainly focused on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals. The electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis from glycerol was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes under acidic conditions. Various parameters such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated. The products obtained were characterized using gas chromatography-mass spectroscopy equipped with an aqueous-stable polyethylene glycol stationary phase column. Under the optimized reaction conditions, the glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7% and 5%, respectively. Based on the products obtained from this study, a reaction mechanism for the process is proposed. In conclusion, this study has successfully converted glycerol into a wide variety of added-value compounds. These chemicals are found to have high market value; they can be used in the pharmaceutical, food and cosmetic industries. This study effectively opens a new approach for the electrochemical conversion of glycerol. For further enhancement of product selectivity, the electrode material is an important parameter to be considered.

Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode

Procedia PDF Downloads 173
227 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of the allergic consumer. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels: among these, egg is included. Egg can be present as an ingredient or as contamination in raw and cooked products. The main allergenic egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d’Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning. The laboratory is ISO 17025 accredited, and since 2019, it has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg (R-Biopharm® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. Results: The results obtained through this study showed that egg proteins were found in 2% (n. 28) of food matrices, including meats and meat products (n. 16), fish and fish products (n. 4), bakery and pastry products (n. 4), pasta (n. 2), preparations for soups and broths (n. 1) and ready-to-eat (n. 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Allergy to eggs is one of the most common food allergies in the pediatric context. The percentage of positivity obtained from this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 111
226 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route

Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku

Abstract:

Introduction: Intra-uterine death is a common problem in obstetrical practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol is an inexpensive synthetic prostaglandin E1 analogue, considered valid thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The regimen of 50 μg every 4 hours has mostly been used, with a high percentage of success and limited side effects. Objective: Evaluation of the efficiency of oral and vaginal misoprostol in inducing labor, compared with its use outside a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for inducing labor in the third trimester for pregnancy termination. A total of 37 patients were admitted for labor induction: 26 randomly assigned according to protocol (10 oral vs. 16 vaginal), and a control group of 11 patients not subject to the protocol. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while participants in the control group were treated individually by the members of the medical staff. The main result of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, women's weight, gestational age, Bishop's score, the size of the uterus and the weight of the fetus between the four groups in the study. The Fisher exact test was used to compare day-stay and causes in the four groups. The Mann-Whitney test was used to compare the time of expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to woman's age and weight, parity, abortion indication, Bishop's score, fetal weight and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol and not according to protocol) was 10.43 h, 21.10 h, 15.77 h and 21.57 h, respectively. There was no difference in maternal complications between the groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not according to previously approved protocols, where complications are greater and unjustified.
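
The three statistical tests named above are available in scipy; the sketch below shows how each comparison could be run, with toy arrays standing in for the per-group measurements (they are not the study's data, and the grouping shown is an illustrative assumption).

```python
# Illustrative sketch: the Kruskal-Wallis, Mann-Whitney U and Fisher exact tests with scipy.
from scipy.stats import kruskal, fisher_exact, mannwhitneyu

# Kruskal-Wallis: compare a continuous variable (e.g. gestational age) across the four groups.
vaginal, oral, per_protocol, off_protocol = [36, 38, 37], [35, 39, 36], [37, 38, 36], [36, 37, 39]
print(kruskal(vaginal, oral, per_protocol, off_protocol))

# Mann-Whitney U: compare induction-to-birth time between the oral and vaginal groups.
print(mannwhitneyu([21.1, 19.5, 22.0], [10.4, 11.2, 9.8]))

# Fisher exact test on a 2x2 table, e.g. delivery within 24 h (yes/no) by route.
print(fisher_exact([[14, 2], [6, 4]]))

# As in the study, p <= 0.05 would be taken as statistically significant.
```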

Keywords: inducing labor, misoprostol, pregnancy termination, third trimester

Procedia PDF Downloads 153
225 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of the quality of distillates should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named, after separation, according to the number of carbon atoms they contain. LSRN consists of five to six carbon-containing hydrocarbons, HSRN consists of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration technique and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
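
As a minimal sketch of one of the calibration approaches named above (Savitzky-Golay preprocessing followed by PLS regression), the snippet below uses scipy and scikit-learn on synthetic spectra; the spectra, the property values, the number of latent variables and the filter window are illustrative assumptions, and EMSC and GILS are not shown.

```python
# Illustrative sketch: Savitzky-Golay derivative preprocessing + PLS regression on NIR-like spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 400, 1500                          # ~400 samples across the NIR range
X = rng.normal(size=(n_samples, n_wavenumbers))               # stand-in for FT-NIR absorbance spectra
y = X[:, 100] * 2.0 + rng.normal(scale=0.1, size=n_samples)   # stand-in physical property values

# Smooth and take the first derivative along the spectral axis.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=10)
scores = cross_val_score(pls, X_sg, y, cv=5, scoring="r2")
print("cross-validated R2 per fold:", np.round(scores, 3))
```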

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 91
224 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the simulator. On the other hand, the recommendations contain additional tests, which were performed with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the legal requirements, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the image noise test result must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability with longitudinal transfer of the loaded table must not show more than 2 mm vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and ultimately the delivery of radiation treatment to the patient.
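A minimal sketch of how the quoted tolerances could be turned into automated pass/fail checks is given below; the baseline and measured values, and the check function itself, are illustrative and not part of the institutional programme described in the abstract.

```python
# Minimal sketch of automated pass/fail checks for the tolerance limits quoted above.
# Baseline and measured values are illustrative only.
def check(name, measured, baseline, tolerance, unit):
    """Flag a parameter whose deviation from the commissioning baseline exceeds the tolerance."""
    deviation = measured - baseline
    passed = abs(deviation) <= tolerance
    print(f"{name}: {deviation:+.1f} {unit} (limit +/-{tolerance:g} {unit}) -> {'PASS' if passed else 'FAIL'}")
    return passed

# CT number accuracy: within +/- 5 HU of the commissioning value (water insert).
check("CT number accuracy (water)", measured=2.1, baseline=0.0, tolerance=5.0, unit="HU")
# Field uniformity: within +/- 10 HU across selected ROIs.
check("Field uniformity", measured=103.4, baseline=100.0, tolerance=10.0, unit="HU")
# Image noise: within 20% of the baseline value (expressed as a relative deviation).
check("Image noise", measured=5.7 / 5.0 * 100, baseline=100.0, tolerance=20.0, unit="%")
# Table stability: vertical deviation of the loaded table no more than 2 mm.
check("Table vertical deviation", measured=1.2, baseline=0.0, tolerance=2.0, unit="mm")
```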

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 497
223 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014

Authors: R. Mofeed, N. Elgendy

Abstract:

Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, cinema introduces comprehensive images of life phenomena in different ways. The term 'represent' suggests a variety of meanings: to bring into presence, to replace, or to typify. In that sense, cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images of informal urbanism that Egyptian cinema introduces and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was completely ignored by the state and society until the eighties, and its evident representation in cinema came only by the mid-nineties. The informal city comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional and marginal through the cinematic lens. This paper aims at tracing the forms of representation of urban informalities in Egyptian cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper runs two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films. This analysis is based on academic research work at Cairo University in Fall 2014. The visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying and repetition in the representation of this huge sector of the city across the more than 50 films that were investigated. The analysis clearly revealed the 'portrayed image' of informality in cinema through the examined period. The second part of the paper explores the 'perceived image'. A designed questionnaire is applied to highlight the main features of that image as perceived by both inhabitants of informalities and other Cairenes, based on watching selected films. The questionnaire covers the different images of informalities proposed in cinema, whether in a comic or a melodramatic register, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affected their mental images. The two images, 'portrayed' and 'perceived', are then confronted to reflect on issues of repetition, stereotyping and reality. The formulated stereotype of informal urbanism is finally outlined and justified in relation to both the production and consumption mechanisms of films and the state's official vision of informalities.

Keywords: cinema, informal urbanism, popular mind, representation

Procedia PDF Downloads 271
222 Fire Safe Medical Oxygen Delivery for Aerospace Environments

Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl

Abstract:

Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion, which may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, it is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685°C/s, the complex desorbs ~90% of the O2 in 110 s. These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will create continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 relative to air and desorb over 90% of the O2 at 100°C. Over 12 thermogravimetry cycles, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove 9 L/min of O2 continuously (~90% desorbed at 100°C) from exhaled air. The complex was synthesized and characterized for reversible O2 absorption and efficacy. It changes color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.
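A back-of-envelope check of the removal-capacity figures quoted above is sketched below; the one-cycle-per-minute assumption is introduced here only to express the per-cycle volume as a rate and is not stated in the abstract.

```python
# Back-of-envelope check of the O2 removal capacity quoted in the abstract.
capacity_per_kg_L = 20.0    # litres of O2 desorbed per kg of complex (from the abstract)
desorbed_fraction = 0.90    # ~90% desorbed at 100 degC (from the abstract)
mass_desorbing_kg = 0.5     # half of a 1 kg charge desorbs while the other half absorbs

o2_per_cycle_L = mass_desorbing_kg * capacity_per_kg_L * desorbed_fraction
print(f"O2 released per desorption cycle: {o2_per_cycle_L:.1f} L")  # 9.0 L
# At roughly one absorb/desorb cycle per minute (our assumption), this matches the
# ~9 L/min continuous removal rate stated in the abstract.
```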

Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption

Procedia PDF Downloads 73
221 Characterization of Bio-Inspired Thermoelastoplastic Composites Filled with Modified Cellulose Fibers

Authors: S. Cichosz, A. Masek

Abstract:

A new hybrid cellulose modification approach, which constitutes a scientific novelty, is introduced. The study reports the properties of cellulose (Arbocel UFC100 – Ultra Fine Cellulose) and characterizes cellulose-filled polymer composites based on an ethylene-norbornene copolymer (TOPAS Elastomer E-140). The approach is a physicochemical two-stage cellulose treatment: solvent exchange (to ethanol or hexane) followed by chemical modification with maleic anhydride (MA). Furthermore, the impact of the drying process on cellulose properties was investigated. Suitable measurements were carried out to characterize the cellulose fibers: spectroscopic investigation (Fourier transform infrared spectroscopy, FTIR; near-infrared spectroscopy, NIR), thermal analysis (differential scanning calorimetry, thermogravimetric analysis) and Karl Fischer titration. It should be emphasized that for all UFC100 treatments carried out, a decrease in moisture content was evidenced. FTIR reveals a drop in the intensity of the absorption band at 3334 cm-1, a peak associated with both –OH moieties and water. Similar results were obtained with Karl Fischer titration. Based on the results obtained, it may be claimed that the use of ethanol contributes greatly to lowering the water absorption ability of cellulose (a decrease in moisture content to approximately 1.65%). Additionally, regarding the polymer composite properties, crucial data were obtained from mechanical and thermal analysis. The highest material performance was noted for the composite sample containing cellulose modified with MA after solvent exchange with ethanol. This specimen exhibited sufficient tensile strength, almost the same as that of the neat polymer matrix – in the region of 40 MPa. Moreover, both the Payne effect and the filler efficiency factor, calculated from dynamic mechanical analysis (DMA), reveal the possibility of the filler having a reinforcing nature. Interestingly, according to the Payne effect results, fibers dried before the chemical modification appear to allow more regular filler structure development in the polymer matrix (Payne effect maximum at 1.60 MPa) compared with those not dried (Payne effect in the range 0.84-1.26 MPa). Furthermore, taking into consideration the data gathered from DSC and TGA, higher thermal stability is obtained for the materials filled with fibers that were dried before the treatments (degradation activation energy in the region of 195 kJ/mol) in comparison with the polymer composite samples filled with unmodified cellulose (degradation activation energy of approximately 180 kJ/mol). To the authors' best knowledge, this work introduces a novel hybrid filler treatment approach. Moreover, it provides valuable data regarding the properties of composites filled with cellulose fibers of various moisture contents. It should be emphasized that the plant-fiber-based polymer bio-materials described in this research might contribute significantly to polymer waste minimization because they are more readily degraded.
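For readers unfamiliar with the quantity, the sketch below shows one common way to compute the Payne effect from a DMA strain sweep, as the difference between the low-strain and high-strain storage-modulus plateaus; the modulus values are illustrative and not the measured data.

```python
# Sketch of quantifying the Payne effect from a DMA strain sweep.
# The storage-modulus values are illustrative, not the measured data.
import numpy as np

strain_amplitude = np.logspace(-2, 1, 8)   # % strain, illustrative sweep
storage_modulus = np.array([41.00, 40.80, 40.20, 39.80, 39.55, 39.46, 39.42, 39.40])  # MPa

# Payne effect: drop between the low-strain plateau and the high-strain plateau.
payne_effect = storage_modulus[0] - storage_modulus[-1]
print(f"Payne effect (delta E'): {payne_effect:.2f} MPa")  # 1.60 MPa in this example
```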

Keywords: cellulose fibers, solvent exchange, moisture content, ethylene-norbornene copolymer

Procedia PDF Downloads 95
220 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches

Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli

Abstract:

Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. A constantly growing research interest has focused on the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts have been reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamic and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the sensitivity of iMSD to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as 'blocked' (trajectories corresponding to immobile granules), 'confined/diffusive' (trajectories corresponding to slowly moving granules in a defined region of the cell), or 'drifted' (trajectories corresponding to fast-moving granules). Under control cell-culture conditions, the average distribution was: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the relationship between glucagon granules and the cytoskeleton. For instance, when Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7%, while immobile granules increase to 56.0 ± 10.7% (the remaining 40.4 ± 10.2% being confined/diffusive). This result confirms the clear link between glucagon granule motion and cytoskeletal structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
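One common way to classify single-particle trajectories into the three motion classes named above is via the anomalous exponent of the mean square displacement (MSD ∝ t^α); the sketch below illustrates this approach in Python with illustrative thresholds and does not reproduce the authors' MATLAB script.

```python
# Sketch of classifying trajectories by the anomalous exponent alpha of the
# time-averaged MSD (MSD ~ t^alpha); thresholds are illustrative, not the authors'.
import numpy as np

def msd(trajectory, max_lag):
    """Time-averaged MSD of an (N, 2) trajectory for lags 1..max_lag (frames)."""
    out = []
    for lag in range(1, max_lag + 1):
        disp = trajectory[lag:] - trajectory[:-lag]
        out.append(np.mean(np.sum(disp**2, axis=1)))
    return np.array(out)

def classify(trajectory, dt=0.05, max_lag=10):
    lags = np.arange(1, max_lag + 1) * dt
    m = msd(trajectory, max_lag)
    alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]   # slope in log-log space
    if alpha < 0.2:
        return "blocked"
    if alpha < 1.1:
        return "confined/diffusive"
    return "drifted"

# Example: a noisy directed trajectory should be classified as "drifted".
rng = np.random.default_rng(1)
t = np.arange(100)[:, None]
directed = 0.02 * t * np.array([[1.0, 0.5]]) + rng.normal(scale=0.01, size=(100, 2))
print(classify(directed))
```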

Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR

Procedia PDF Downloads 66
219 Circulating Public Perception on Agroforestry: Discourse Networks Analysis Using Social Media and Online News Media in Four Countries of the Sahel Region

Authors: Luisa Müting, Wisnu Harto Adiwijoyo

Abstract:

Agroforestry systems transform the agricultural landscapes of the Sahel region of Africa, providing food and farming products consumed for subsistence or sold for income. In the increasingly dry climate of the Sahel, the spread of agroforestry practices is integral to policymakers' efforts to counteract land degradation and restore soils in the region. Several measures on agroforestry practices have been implemented in the region by governmental and non-governmental institutions in recent years. However, despite these efforts, past research shows that awareness of how policies and interventions are consumed and perceived by the public remains low. Therefore, interpreting public policy dilemmas by analyzing public perception of agroforestry concepts and practices is necessary. Public perceptions and discourses can be an essential driver of, or constraint on, the adoption of agroforestry practices in the region. Thus, understanding the public discourse behavior of crucial stakeholders could assist policymakers in developing inclusive and contextual policies that are relevant to agroforestry adoption in the Sahel region. To answer how information about agroforestry spreads and is perceived by the public, and given that internet usage has increased drastically over the past decade, with about 33 percent of the population connected to the internet, this research is based on online conversation data. Social media data from Facebook were gathered daily between April 2021 and April 2022 in Djibouti, Senegal, Mali, and Nigeria, selected for their share of active internet users compared to other countries in the Sahel region. A systematic methodology based on discourse network analysis (DNA) was applied to the extracted social media data. The study then clustered the data by type of agroforestry practice, sentiment, and country. Additionally, text data from online news media were extracted during the same period to pinpoint events related to agroforestry. The preliminary results indicate that tree management, crop and livestock integration, diversifying species and genetic resources, and focusing on interactions and productivity across the agricultural system are the most notable keywords in agroforestry-related conversations within the four countries. Additionally, approximately 84 percent of the discussions were still dominated by big actors, such as NGOs or government bodies. Furthermore, as a subject of communication within the agroforestry discourse, the Great Green Wall initiative generates almost 60 percent positive sentiment within the captured social media data, effectively having a more significant outreach than general agroforestry topics. This study provides scholars and policymakers with a springboard for further research or policy design on agroforestry in the four countries of the Sahel region, drawing on systematically collected, previously uncaptured data from the internet.
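A minimal sketch of how coded discourse statements could be aggregated by country, theme, sentiment and actor type is shown below; the column names and example rows are hypothetical and do not come from the project dataset.

```python
# Minimal sketch of aggregating coded discourse statements by country, theme and
# sentiment; the column names and example rows are hypothetical.
import pandas as pd

posts = pd.DataFrame([
    {"country": "Senegal",  "theme": "Great Green Wall", "actor": "NGO",        "sentiment": "positive"},
    {"country": "Mali",     "theme": "tree management",  "actor": "government", "sentiment": "neutral"},
    {"country": "Nigeria",  "theme": "Great Green Wall", "actor": "individual", "sentiment": "positive"},
    {"country": "Djibouti", "theme": "crop-livestock",   "actor": "NGO",        "sentiment": "negative"},
])

# Share of each sentiment within every country/theme combination.
shares = (posts.groupby(["country", "theme"])["sentiment"]
               .value_counts(normalize=True)
               .rename("share")
               .reset_index())
print(shares)

# Share of statements driven by institutional actors (NGO or government).
institutional = posts["actor"].isin(["NGO", "government"]).mean()
print(f"Institutional actors: {institutional:.0%} of statements")
```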

Keywords: sahel, djibouti, senegal, mali, nigeria, social networks analysis, public discourse analysis, sentiment analysis, content analysis, social media, online news, agroforestry, land restoration

Procedia PDF Downloads 67
218 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of giving computers the ability to see is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through counting, positioning, and object-recognition algorithms based on one or more cameras. At the same time, companies use assembly lines formed by conveyor systems with actuators for moving pieces from one location to another during production. These devices must be programmed in advance and must follow a programmed logic routine. Nowadays, production, quality, and the fast execution of the different stages and processes in the production chain of any product or service are the main targets of every industry. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that organize the figures into cubicles, which also differ from one another by color. Since this project is based on artificial vision, the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, linked with the OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's webcam or by a specialized camera. 1.3 The recognition of RGB colors is performed in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; the image is then binarized, which means representing the figure in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program communicates with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for further processing. With the program developed in this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
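The segmentation and edge-counting steps 1.3-1.5 can be sketched as follows; this is a Python/OpenCV illustration under the assumption of a single bright figure on a darker background, not the authors' Qt Creator/C++ implementation.

```python
# Sketch of the grayscale -> binarize -> contour -> edge-count pipeline (OpenCV 4.x API).
import cv2
import numpy as np

def classify_shape(bgr_image):
    """Return (shape, mean BGR colour) of the largest figure found in the image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Binarize: white figure on a black background (Otsu threshold).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    contour = max(contours, key=cv2.contourArea)
    # Count edges via polygon approximation of the contour.
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    if len(approx) == 3:
        shape = "triangle"
    elif len(approx) == 4:
        shape = "square"
    else:
        shape = "circle"
    # Mean colour inside the figure, used to route it to the matching cubicle.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, -1)
    mean_bgr = cv2.mean(bgr_image, mask=mask)[:3]
    return shape, mean_bgr

# frame = cv2.VideoCapture(0).read()[1]   # e.g. grab a frame from the webcam
# print(classify_shape(frame))
```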

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 357
217 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations

Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis

Abstract:

In-flight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, drones are increasingly sophisticated, and it is today a priority to develop ever more efficient defensive systems against all these potential threats. In this frame, recent high-energy laser (HEL) weapon prototypes have demonstrated extremely good operational ability to shoot down, within seconds, flying targets several kilometers away. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at the velocity of light over large ranges. In recent years, a strong focus was placed at ISL on the interaction of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and unmanned air vehicles (UAVs) are of utmost interest in modern warfare. These aerial systems are usually made of polymer-based composite materials, whose complexity involves new scientific challenges. Alongside this main laser-matter interaction activity, a large body of experimental and numerical knowledge has been gathered at ISL in domains such as spectrometry, thermodynamics and mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to accurately estimate essential key values. Results from these diverse tasks are then incorporated into analytic or finite-element numerical models that were elaborated, for example, to predict thermal repercussions on explosive charges or mechanical failures of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.

Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling

Procedia PDF Downloads 234
216 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency

Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader

Abstract:

Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked through 1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolysis products (xylose, xylobiose and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, exploring the microbial pool of extremely diversified ecosystems is vital. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolytic rates, can produce high product yields, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge the positive aspects of both a thermozyme and an immobilization technique. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase, which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L-1) and calcium chloride (0.4 M) were used in combination. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time increased from 5.0 to 30.0 minutes compared with the free counterpart. The diffusion limitation of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min-1, whereas the Km value increased from 0.5074 to 0.5722 mg ml-1 with reference to the free enzyme. The immobilized endo-β-1,4-xylanase showed stability at high temperatures compared with the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and can be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan. However, its immobilization properties could be improved further by immobilizing it on different supports for industrial purposes.
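As an illustration of how kinetic parameters such as the reported Vmax and Km can be obtained, the sketch below fits the Michaelis-Menten equation v = Vmax·S/(Km + S) to synthetic rate data generated around the values reported for the immobilized enzyme; the data points are not experimental measurements.

```python
# Sketch of estimating Vmax and Km by fitting the Michaelis-Menten equation
# v = Vmax * S / (Km + S); the substrate/rate points below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])         # mg/ml xylan
rate      = np.array([30.5, 62.0, 95.0, 130.0, 158.0, 183.0]) # U/min, synthetic

(vmax, km), _ = curve_fit(michaelis_menten, substrate, rate, p0=(200.0, 0.5))
print(f"Vmax = {vmax:.1f} U/min, Km = {km:.2f} mg/ml")
```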

Keywords: immobilization, reusability, thermozymes, xylanase

Procedia PDF Downloads 355
215 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant

Authors: Weiam Daear, Patrick Lai, Elmar Prenner

Abstract:

The human body offers many avenues that could be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of a monolayer of lipids and a few proteins, called the lung surfactant, together with cells. This monolayer consists of ~90% lipids and ~10% proteins produced by the alveolar epithelial cells. The two major lipid classes, phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, represent 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli. It is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important, owing to their large surface area and potential for specific targeting. Therefore, studying the interaction of NPs with the lung surfactant, and whether they affect its stability, becomes essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of its interaction with polymeric NPs. This biomimetic model will function as a fast initial test of whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system containing the major PC and PG lipids. Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system; in the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is little monolayer data on the biophysical analysis of the 16:0/16:1 lipids, therefore more analysis of them will be discussed here. The Langmuir trough is used for stability measurements, monitoring changes in the monolayer's surface pressure upon NP interaction. Furthermore, Brewster angle microscopy (BAM) is employed to visualize changes in the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that also depend on monolayer fluidity. Furthermore, results show that the film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. NPs within a certain size range have been shown to be safe for the pulmonary route, but little is known about the mode of interaction of these polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.

Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles

Procedia PDF Downloads 149
214 Exploring the Career Experiences of Internationally Recruited Nurses at the Royal Berkshire NHS Foundation Trust

Authors: Natalie Preville, Carlos Joel Mejia-Olivares

Abstract:

In the UK, international staff have played an important role in the NHS since its founding in 1948, and they currently represent 16% of its workforce. Furthermore, to address shortfalls in nursing staff, international recruitment programs have been essential to reduce the gaps in the UK nursing workforce over the last two decades. The NHS Long Term Plan (2019) aims for a significant reduction of nursing vacancies to 5% by 2028. However, the 2021 and 2022 Workforce Race Equality Standard (WRES) reports stated that there is inequitable career progression (CP) among internationally recruited (IR) nurses compared with their British counterparts. In addition, there is ample literature exploring the motives and lived experiences of IR nurses, which underpins the findings. Therefore, the overall aim of this report is to conduct a scoping project to understand the experiences of the IR nurses who joined the NHS in the South East of England within the last 5 years. Methodology: This document is based on data from a survey developed by Royal Berkshire NHS Foundation Trust using Microsoft Forms, consisting of 23 questions divided into four themes: staff background, career experience, career progression, and future career plans within Royal Berkshire NHS Foundation Trust. Descriptive analysis provided the initial analysis of the quantitative data; 44 responses were collected and evaluated using Microsoft Excel. Key findings: Career experiences: 72% of respondents felt that their current role was a good fit, and in a subsequent question, the main reason cited was having 'relevant skills'. This indicates that, for the most part, the prior experience of IR nurses is a large factor in their placement, which is viewed positively; the next step is to apply similar relevance in aligning prior experience with career progression opportunities. Moreover, 67% of respondents feel valued by their department/team, which reflects well on the values of the Trust as demonstrated towards IR nurses. However, further studies may be necessary to explore why the remaining 33% may not feel valued; this can include developing a better understanding of cultural perceptions of value. Perceived barriers: Although 37% of respondents had been promoted since commencing employment with the Trust, the data indicate that there is still room for CP opportunities, as this is the leading barrier reported by respondents. Secondly, the growing mix of cultures within the nursing workforce gives the appearance of inclusion; however, this is not the experience of some IR nurses. Conclusion statement: Survey results indicate that this NHS Trust has an excellent foundation for integrating international nurses into its workforce, with scope for career progression in a reasonable timeframe. However, it would be advisable to fast-track career promotions by recognizing previous studies and professional experience. Further exploration of staff career experiences and goals may provide additional useful data for future planning.

Keywords: career progression, international nurses, perceived barriers, staff survey

Procedia PDF Downloads 52
213 Mycotoxin Bioavailability in Sparus Aurata Muscle After Human Digestion and Intestinal Transport (Caco-2/HT-29 Cells) Simulation

Authors: Cheila Pereira, Sara C. Cunha, Miguel A. Faria, José O. Fernandes

Abstract:

The increasing world population brings several concerns, one of which is food security and sustainability. To meet this challenge, aquaculture, the farming of aquatic animals and plants, including fish, mollusks, bivalves, and algae, has experienced sustained growth and development in recent years. Recent advances in this industry have focused on reducing its economic and environmental costs, for example through the substitution of protein sources in fish feed. Plant-based proteins are now a common approach, and while they are a greener alternative to animal-based proteins, there are some disadvantages, such as their potential content of toxicants such as mycotoxins. Mycotoxins are naturally occurring plant contaminants, and exposure in fish can cause health problems, stunted growth or even death, resulting in economic losses for producers and health concerns for consumers. Different works have demonstrated the presence of both AFB1 (aflatoxin B1) and ENNB1 (enniatin B1) in fish feed and their capacity to be absorbed and to bioaccumulate in the fish organism after digestion, further reaching humans through fish consumption. The aim of this work was to evaluate the bioaccessibility of both mycotoxins in samples of Sparus aurata muscle using a static digestion model based on the INFOGEST protocol. The samples were subjected to different cooking procedures – raw, grilled and fried – and different seasonings – none, thyme and ginger – in order to evaluate their potential for reducing mycotoxin bioaccessibility. This was followed by evaluation of the intestinal transport of both compounds with an in vitro cell model composed of Caco-2/HT-29 co-culture monolayers, simulating the human intestinal epithelium. The bioaccessible fractions obtained in the digestion studies were used in the transport studies for a more realistic approach to bioavailability evaluation. Results demonstrated the effect of the different cooking procedures and seasonings on the toxins' bioavailability. Sparus aurata was chosen for this study because of its large production in aquaculture and high consumption in Europe. Also, with the continued evolution of fish farming practices and the more common use of novel plant-based feed ingredients, there is growing concern about less studied contaminants in aquaculture and their consequences for human health. In parallel with greener advances in this industry, there is a convergence towards alternative research methods, such as in vitro applications. In the case of bioavailability studies, both in vitro digestion protocols and intestinal transport assessment are excellent alternatives to in vivo studies. These methods provide fast, reliable and comparable results without ethical constraints.

Keywords: AFB1, aquaculture, bioaccessibility, ENNB1, intestinal transport

Procedia PDF Downloads 27