Search results for: the degree of conversion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3900

690 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions

Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren

Abstract:

Based on multivariate statistical analysis theory, this paper uses principal component analysis, Mahalanobis distance analysis, and curve fitting to establish a photovoltaic health model for evaluating the health of photovoltaic panels. First, according to weather conditions, the photovoltaic panel data are classified into five categories: sunny, cloudy, rainy, foggy, and overcast, and the health of photovoltaic panels under these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by photovoltaic panels has a significant nonlinear relationship with time. Curve fitting was used to fit the relationship between the amount of electricity generated and time, and a nonlinear equation was obtained. Then, principal component analysis was applied to the independent variables under the five weather conditions. According to the Kaiser-Meyer-Olkin test, three types of weather (overcast, foggy, and sunny) meet the conditions for factor analysis, while cloudy and rainy weather do not. Through principal component analysis, the main components of overcast weather are temperature, AQI, and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI, and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity, and time. Finally, taking the variable values in sunny weather as observed values and the main components of cloudy, foggy, overcast, and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values were obtained.
A comparative analysis of the degree of deviation of the Mahalanobis distances was carried out to determine the health of the photovoltaic panels under different weather conditions. The weather conditions, ordered from the smallest to the largest Mahalanobis distance fluctuations, were: foggy, cloudy, overcast, and rainy.
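The core computation this abstract relies on, the Mahalanobis distance between an observed value and a weather-specific sample cloud, can be sketched in a few lines. This is an illustrative two-variable version, not the authors' SPSS/MATLAB code; the function name `mahalanobis_2d` and the analytic 2x2 covariance inversion are assumptions made here for the sketch:

```python
from statistics import mean

def mahalanobis_2d(obs, samples):
    """Mahalanobis distance from a 2-D observation to a sample cloud.

    obs: (x, y) pair; samples: list of (x, y) pairs.
    Uses the sample covariance matrix (n - 1 denominator) and inverts
    the 2x2 matrix analytically.
    """
    n = len(samples)
    mx = mean(p[0] for p in samples)
    my = mean(p[1] for p in samples)
    sxx = sum((p[0] - mx) ** 2 for p in samples) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in samples) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in samples) / (n - 1)
    det = sxx * syy - sxy ** 2
    dx, dy = obs[0] - mx, obs[1] - my
    # quadratic form diff' * inv(cov) * diff, with the 2x2 inverse written out
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return d2 ** 0.5
```

A larger deviation of this distance for a given weather type would then indicate poorer panel health relative to the sunny-weather baseline.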

Keywords: fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB

Procedia PDF Downloads 139
689 Reading Strategy Instruction in Secondary Schools in China

Authors: Leijun Zhang

Abstract:

Reading literacy has become a powerful tool for academic success and an essential goal of education. The ability to read is not only fundamental for pupils’ academic success but also a prerequisite for successful participation in today’s vastly expanding multi-literate textual environment. It is also important to recognize that, in many educational settings, students are expected to learn a foreign/second language for successful participation in the increasingly globalized world. Therefore, it is crucial to help learners become skilled foreign-language readers. Research indicates that students’ reading comprehension can be significantly improved through explicit instruction of multiple reading strategies. Despite the wealth of research on enhancing learners’ reading comprehension, which has identified an enormous range of reading strategies and techniques for assisting students in comprehending specific texts, relatively few studies have examined whether these strategies and techniques are actually used in classrooms, especially in Chinese academic settings. Given the central role of ‘the teacher’ in reading instruction, the study investigates the degree of importance that EFL teachers attach to reading comprehension strategies and their classroom employment of those strategies in secondary schools in China. It also explores the efficiency of reading strategy instruction on pupils’ reading comprehension performance. As a mixed-methods study, the analysis drew on data from a quantitative survey and interviews with seven teachers. The study revealed that the EFL teachers had positive attitudes toward the use of cognitive strategies despite insufficient knowledge of, and limited attention to, metacognitive and support strategies.
Regarding the selection of reading strategies for instruction, the mandated curriculum and high-stakes examinations, text features and demands, teacher preparation programs, and the teachers' own EFL reading experiences were the major criteria in their responses, while few teachers took learner needs into account when choosing reading strategies. Although many teachers agreed on the efficiency of reading strategy instruction in developing students’ reading comprehension competence, three challenges were identified in their implementation of strategy instruction. The study provides insights into reading strategy instruction in EFL contexts and proposes implications for curriculum innovation, teacher professional development, and reading instruction research.

Keywords: reading comprehension strategies, EFL reading instruction, language teacher cognition, teacher education

Procedia PDF Downloads 84
688 Insights into the Annotated Genome Sequence of Defluviitoga tunisiensis L3 Isolated from a Thermophilic Rural Biogas Producing Plant

Authors: Irena Maus, Katharina Gabriella Cibis, Andreas Bremges, Yvonne Stolze, Geizecler Tomazetto, Daniel Wibberg, Helmut König, Alfred Pühler, Andreas Schlüter

Abstract:

Within the agricultural sector, the production of biogas from organic substrates represents an economically attractive technology for generating bioenergy. Complex consortia of microorganisms are responsible for biomass decomposition and biogas production. Recently, species belonging to the phylum Thermotogae were detected in thermophilic biogas-production plants utilizing renewable primary products for biomethanation. To analyze adaptive genome features of representative Thermotogae strains, Defluviitoga tunisiensis L3 was isolated from a rural thermophilic biogas plant (54°C) and completely sequenced on an Illumina MiSeq system. Sequencing and assembly of the D. tunisiensis L3 genome yielded a circular chromosome with a size of 2,053,097 bp and a mean GC content of 31.38%. Functional annotation of the complete genome sequence revealed that the thermophilic strain L3 encodes several genes predicted to facilitate growth of this microorganism on arabinose, galactose, maltose, mannose, fructose, raffinose, ribose, cellobiose, lactose, xylose, xylan, lactate, and mannitol. Acetate, hydrogen (H2), and carbon dioxide (CO2) are assumed to be end products of the fermentation process. These end products are substrates for methanogenic archaea, the key players in the final step of the anaerobic digestion process. To determine the degree of relatedness of dominant biogas community members within selected digester systems to D. tunisiensis L3, metagenome sequences from the corresponding communities were mapped onto the L3 genome. These fragment recruitments revealed that metagenome reads originating from a thermophilic biogas plant covered 95% of the D. tunisiensis L3 genome sequence. In conclusion, the availability of the D. tunisiensis L3 genome sequence and insights into its metabolic capabilities provide the basis for biotechnological exploitation of genome features involved in thermophilic fermentation processes utilizing renewable primary products.
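The reported mean GC content (31.38% over the 2,053,097 bp chromosome) is a summary statistic that can be computed directly from an assembled sequence. A minimal sketch follows; the helper name `gc_content` is hypothetical, and a real annotation pipeline would also handle ambiguous IUPAC bases:

```python
def gc_content(seq):
    """Percentage of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)
```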

Keywords: genome sequence, thermophilic biogas plant, Thermotogae, Defluviitoga tunisiensis

Procedia PDF Downloads 491
687 Aristotle’s Notion of Prudence as Panacea to the Leadership Crisis in Nigeria

Authors: Wogu Ikedinachi Ayodele Power, Agbude Godwyns, Eniayekon Eugenia, Nchekwube Excellence-Oluye, Abasilim Ugochukwu David

Abstract:

Contemporary ethicists and writers on leadership, in their quest to address the problem of leadership crisis in Nigeria, have identified the absence of practical prudence -which manifests in variables such as corruption, ethnicity and greed- as one of the major factors which breeds leadership crises. These variables are further fuelled by the lack of a consistent theory of leadership among scholars that could guide the pertinent actions of political leaders, hence the rising cases of leadership crises in the country. The theoretical framework that guides this study emanates from Aristotle’s notion of prudence as contained in the Nicomachean Ethics, which states that prudence is a central moral resource for political leaders. The method of conceptual analysis shall be used to clarify the concepts of virtue, prudence and leadership. The traditional method of critical analysis and the reconstructive method of ideas in philosophy are used to conceptually and contextually analyze all relevant texts and archival materials in the subject areas of this study. The study identifies a high degree of ideological bias and logical inconsistencies inherent in the theories of leadership proposed by the realist and the moralist schools of thought. The conflicting ideologies regarding what political leadership should be among scholars of leadership is identified as one of the major factors militating against ascertaining a practicable theory of leadership, which has the capacity to guide the pertinent actions of political leaders all over the world. This paper therefore identifies the absence of practical prudence, ‘wisdom’, as the major factor associated with leadership crises in Nigeria. We therefore argue that only prudent leaders will have the capacity to identify salient aspects of political situations which leaders have obligations to consider before making political decisions. 
Seven frameworks derived from Aristotle’s notion of prudence are prescribed to strengthen this position; they include disciplined reason and openness to experience, and foresight and attention to the long term, among others. We submit that leadership devoid of crisis can be attained through the application of the virtue of prudence. Where this theory is adopted, it should help eliminate further leadership crises in Nigeria.

Keywords: Aristotle, leadership crisis, political leadership, prudence

Procedia PDF Downloads 379
686 The Challenges of Digital Crime Nowadays

Authors: Bendes Ákos

Abstract:

Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, new types of crime have evolved and transformed, so it is extremely important to examine them in order to obtain a comprehensive picture that can support the work of the authorities. As early as 1865, with the technologies of the time, people were able to forge a photograph of a quality that is difficult to recognize as a forgery even today. With the help of today's technology, authorities receive a great deal of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to assess the authenticity of the given evidence. The digital world has many dangers. We live in an age in which we must protect everything digitally: our phones, our computers, our cars, and all the smart devices present in our personal lives. This burden falls not only on individuals, since companies, state institutions, and public utilities are also forced to do so. The training of specialists and experts is essential so that the authorities can manage incoming digital evidence at an adequate level. When analyzing evidence, it is important to be able to examine it from the moment it was created. Establishing authenticity is a very important issue during official procedures. After proper acquisition of the evidence, it is essential to store it safely and use it professionally. Otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems.
The most effective way to combat digital crime is prevention, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crime. After a comparison of Hungarian investigative techniques with international practice, modernization proposals are given. Sufficiently stable yet flexible legislation is needed that can keep pace with rapid changes in the world and provide an appropriate framework rather than regulating after the fact. It is also important to be able to distinguish between digital and digitalized evidence, as their probative force differs greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crime.

Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism

Procedia PDF Downloads 58
685 Tools and Techniques in Risk Assessment in Public Risk Management Organisations

Authors: Atousa Khodadadyan, Gabe Mythen, Hirbod Assa, Beverley Bishop

Abstract:

Risk assessment, and the knowledge provided through this process, is a crucial part of any decision-making process in the management of risks and uncertainties. Failure in the assessment of risks can cause inadequacy in the entire process of risk management, which in turn can lead to failure in achieving organisational objectives as well as significant damaging consequences for the populations affected by the potential risks being assessed. The choice of tools and techniques in risk assessment can influence the degree and scope of decision-making and subsequently the risk response strategy. Various qualitative and quantitative tools and techniques are deployed within the broad process of risk assessment. The sheer diversity of tools and techniques available to practitioners makes it difficult for organisations to consistently employ the most appropriate methods. This adaptation of tools and techniques is rendered more difficult in public risk regulation organisations due to the sensitive and complex nature of their activities. This is particularly the case in areas relating to the environment, food, and human health and safety, where organisational goals are tied up with societal, political, and individual goals at national and international levels. Hence, this study set out to recognise, analyse, and evaluate the different decision-support tools and techniques employed in assessing risks in public risk management organisations. This research is part of a mixed-methods study which aimed to examine the perception of risk assessment and the extent to which organisations practise risk assessment tools and techniques. The study adopted a semi-structured questionnaire with qualitative and quantitative data analysis and included a range of public risk regulation organisations from the UK, Germany, France, Belgium, and the Netherlands.
The results indicated that public risk management organisations use diverse tools and techniques in the risk assessment process. Preliminary hazard analysis, brainstorming, and hazard analysis and critical control points were described as the most practised risk identification techniques. Within qualitative and quantitative risk analysis, the participants named expert judgement, risk probability and impact assessment, sensitivity analysis, and data gathering and representation as the most practised techniques.

Keywords: decision-making, public risk management organisations, risk assessment, tools and techniques

Procedia PDF Downloads 276
684 Genetic Diversity of Wild Population of Heterobranchus Spp. Based on Mitochondria DNA Cytochrome C Oxidase Subunit I Gene Analysis

Authors: M. Y. Abubakar, Ipinjolu J. K., Yuzine B. Esa, Magawata I., Hassan W. A., Turaki A. A.

Abstract:

Catfish (Heterobranchus spp.) are major freshwater fishes that are widely distributed in Nigerian waters and are undergoing rapid aquaculture expansion. However, indiscriminate artificial crossbreeding of the species with others poses a threat to their biodiversity. There is a paucity of information about their genetic variability, and such insight is badly needed, not only for species conservation but also for aquaculture expansion. In this study, we assessed genetic diversity, population differentiation, and phylogenetic relationships in 35 individuals from two populations of Heterobranchus bidorsalis and 29 individuals from three populations of Heterobranchus longifilis using the mitochondrial cytochrome c oxidase subunit I (mtDNA COI) gene sequence. Nucleotide sequences of a 650 bp fragment of the COI gene of the two species were compared. In total, 4 and 5 haplotypes were distinguished in the populations of H. bidorsalis and H. longifilis, with accession numbers MG334168 - MG334171 and MG334172 - MG334176, respectively. Haplotype diversity indices ranged from 0.59 ± 0.08 to 0.57 ± 0.09 in the H. bidorsalis populations and from 0.000 to 0.001051 ± 0.000945 in the H. longifilis populations. Analysis of molecular variance (AMOVA) revealed no significant variation between the Niger and Benue River populations of H. bidorsalis, but significant genetic variation was detected among the Niger, Kaduna, and Benue River populations of H. longifilis. Two main clades were recovered in the phylogenetic tree, showing a clear separation between H. bidorsalis and H. longifilis. The mtDNA COI genes studied revealed high gene flow between populations, with no distinct genetic differentiation between the populations as measured by the fixation index (FST) statistic.
However, a proportion of population-specific haplotypes was observed in the two species studied, suggesting a substantial degree of genetic distinctiveness for each of the populations investigated. These findings provide a description of the species' characteristics and accessions of the fishes' genetic resources through the gene sequences submitted to the genetic database. The data will help to protect this valuable wild resource and contribute to its recovery and selective breeding in Nigeria.
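Haplotype diversity indices of the kind reported above are commonly computed with Nei's formula h = n/(n-1) * (1 - Σ p_i²), where p_i is the frequency of haplotype i among n sequences. A minimal sketch follows; the function name is illustrative, and the authors' actual analysis software is not specified in the abstract:

```python
from collections import Counter

def haplotype_diversity(haplotype_ids):
    """Nei's haplotype (gene) diversity: h = n/(n-1) * (1 - sum of p_i^2).

    haplotype_ids: one haplotype label per sampled sequence.
    """
    n = len(haplotype_ids)
    counts = Counter(haplotype_ids)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)
```

A population in which every sequence shares one haplotype yields h = 0, matching the near-zero values reported for some H. longifilis populations.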

Keywords: AMOVA, genetic diversity, Heterobranchus spp., mtDNA COI, phylogenetic tree

Procedia PDF Downloads 134
683 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI/machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, using sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
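The forward pass of a single-hidden-layer network with sigmoid activation, the building block this abstract describes, can be sketched as follows. This is an illustrative pure-Python version, not the authors' MATLAB model; the function names and the weight layout (bias stored as the last element of each weight vector) are assumptions made for the sketch:

```python
import math

def sigmoid(x):
    """Logistic activation, maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """One forward pass of a single-hidden-layer sigmoid network.

    x: list of input features.
    w_hidden: one weight vector per hidden unit; last element is the bias.
    w_out: output-layer weights over the hidden units; last element is the bias.
    """
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w[:-1], x)) + w[-1])
              for w in w_hidden]
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out[:-1], hidden)) + w_out[-1])
```

In a random-weight scheme, only the output layer would typically be fitted while the hidden weights stay randomly initialized.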

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 90
682 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper describes the development of an objective index for diagnosing autism using measurements of subtle eyeball movement. Assessments of developmental disability vary, and the diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that enables anyone to obtain the same quantitative judgment is needed. In conventional autism studies, diagnoses are made based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. We then developed an objective evaluation indicator that distinguishes non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his or her eyeballs stay fixed on that point, the eyes perform subtle fixational movements (i.e., tremor, drift, and microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to the nervous system and reflect the mechanisms by which the brain processes sight. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from the images of the captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the subject's pupil into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels in the part overlapping the present pixels, based on the afterimage. 5) Process the images at 24 - 30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers.
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We set this amount of change as the quantity of subtle eyeball movement. This method made it possible to express changes in eyeball vibration as numerical values. By comparing the numerical values of the right and left eyes, we found that there is a difference in how much they move. We compared this difference between non-autistic and autistic people and analyzed the results. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. We then set the identification border using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index to determine autism, normal, false positive, and false negative cases.
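Step 4 of the conversion process, counting per pupil quadrant the pixels that overlap between the afterimage and the current frame, can be sketched as follows. Binary (thresholded) images and the function name `quadrant_overlap` are illustrative assumptions, not the authors' implementation:

```python
def quadrant_overlap(reference, frame):
    """Count, per quadrant, pixels marked as pupil (True) in both the
    reference afterimage and the current frame.

    reference, frame: equal-sized 2-D lists of booleans; quadrants are
    split at the image centre (tl/tr/bl/br = top-left .. bottom-right).
    """
    h, w = len(reference), len(reference[0])
    cy, cx = h // 2, w // 2
    counts = {"tl": 0, "tr": 0, "bl": 0, "br": 0}
    for y in range(h):
        for x in range(w):
            if reference[y][x] and frame[y][x]:
                key = ("t" if y < cy else "b") + ("l" if x < cx else "r")
                counts[key] += 1
    return counts
```

Frame-to-frame changes in these four counts would then give the numerical quantity of subtle movement described above.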

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 275
681 Exploring the In-Between: An Examination of the Contextual Factors That Impact How Young Children Come to Value and Use the Visual Arts in Their Learning and Lives

Authors: S. Probine

Abstract:

The visual arts have been shown to be a central means through which young children can communicate their ideas, reflect on experience, and construct new knowledge. Despite this, perceptions of the visual arts, and the degree to which they are valued within education, vary widely across political, educational, community, and family contexts. These differing perceptions informed my doctoral research project, which explored the contextual factors that affect how young children come to value and use the visual arts in their lives and learning. The qualitative methodology of narrative inquiry, with the inclusion of arts-based methods, was most appropriate for this inquiry. Using a sociocultural framework, the stories collected were analysed through the sociocultural theories of Lev Vygotsky as well as the work of Urie Bronfenbrenner, together with postmodern theories of identity formation. The use of arts-based methods, such as teachers' reflective art journals and the collection of images by child participants and their parents/caregivers, allowed the research participants to have a significant role in the research. Three early childhood settings at which the visual arts were deeply valued as a meaning-making device in children’s learning were purposively selected to be involved in the research. At each setting, the study found a unique and complex web of influences and interconnections that shaped how children utilised the visual arts to mediate their thinking. Although the teachers' practices at all three centres were influenced by sociocultural theories, each setting's interpretation of these theories was unique and resulted in innovative interpretations of the role of the teacher in supporting visual arts learning. These practices had a significant impact on children’s experiences of the visual arts. For many of the children involved in this study, visual art was the primary means through which they learned.
The children in this study used visual art to represent their experiences, relationships, to explore working theories, their interests (including those related to popular culture), to make sense of their own and other cultures, and to enrich their imaginative play. This research demonstrates that teachers have fundamental roles in fostering and disseminating the importance of the visual arts within their educational communities.

Keywords: arts-based methods, early childhood education, teacher's visual arts pedagogies, visual arts

Procedia PDF Downloads 136
680 The Spatial and Temporal Distribution of Ambient Benzene, Toluene, Ethylbenzene and Xylene Concentrations at an International Airport in South Africa

Authors: Ryan S. Johnson, Raeesa Moolla

Abstract:

Airports are known air pollution hotspots due to the variety of fuel-driven activities that take place within them. As such, people working within airports are particularly vulnerable to exposure to hazardous air pollutants, including hundreds of aromatic hydrocarbons and, more specifically, a group of compounds known as BTEX (viz. benzene, toluene, ethyl-benzene and xylenes). These compounds have been identified as harmful to human and environmental health. Through the use of passive and active sampling methods, the spatial and temporal variability of benzene, toluene, ethyl-benzene and xylene concentrations within the international airport was investigated. Two sampling campaigns were conducted. To quantify the temporal variability of concentrations within the airport, an active sampling strategy using a Synspec Spectras Gas Chromatography 955 instrument was used. A passive sampling campaign using Radiello passive samplers was used to quantify the spatial variability of these compounds. In addition, since meteorological factors are known to affect the dispersal and dilution of pollution, a Davis Pro-Weather 2 station was utilised to measure in situ weather parameters (viz. wind speed, wind direction and temperature). Results indicated that toluene varied considerably more on a daily temporal scale than the other compounds. Toluene further exhibited a strong correlation with the meteorological parameters, suggesting that toluene was affected by these parameters to a greater degree than the other pollutants. The passive sampling campaign revealed that total BTEX concentrations ranged between 12.95 and 124.04 µg m⁻³. From the results obtained, it is clear that benzene, toluene, ethyl-benzene and xylene concentrations are heterogeneously spatially dispersed within the airport.
Due to the slow wind speeds recorded over the passive sampling campaign (1.13 m s⁻¹), the hotspots were located close to the main concentration sources. The most significant hotspot was located over the main apron of the airport. Further, extensive investigations into the seasonality of hazardous air pollutants at the airport are recommended in order to draw sound conclusions about the temporal and spatial distribution of benzene, toluene, ethyl-benzene and xylene concentrations within the airport.
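The reported link between toluene and the meteorological parameters would typically be quantified with a correlation coefficient such as Pearson's r; the abstract does not state which statistic the authors used, so the following is only an illustrative sketch:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. hourly toluene concentrations vs. hourly wind speed."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))
```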

Keywords: airport, air pollution hotspot, BTEX concentrations, meteorology

Procedia PDF Downloads 197
679 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist without food, energy, and water supplies. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas increases the energy demand for transporting food and goods between regions. As energy resources are being depleted all over the world, environmental impacts that cross city boundaries are also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, there is still a gap between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, carbon footprints have been assessed at different geographical scales, such as the nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. The carbon footprint also offers insight into the geographical distribution of carbon intensity at a regional level in the agricultural field. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adopted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprints of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprints of residents were calculated by converting the respondents' home and travel fossil fuel use into metric tons of carbon dioxide (tCO₂), multiplying by conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, carbon footprints may be measured and applied at various spatial scales, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, the consumption of food, goods, and services can also be included in carbon footprint calculations in urban planning and geography.
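The conversion described, multiplying each energy use by a factor equivalent to its carbon intensity and summing to tCO₂, can be sketched as follows. The factor values below are placeholders for illustration only; real conversion factors are country- and fuel-specific and are not given in the abstract:

```python
# Placeholder emission factors in kg CO2 per unit of use (illustrative only;
# real factors vary by country, grid mix, and fuel quality).
EMISSION_FACTORS = {
    "electricity_kwh": 0.45,
    "natural_gas_m3": 2.0,
    "gasoline_litre": 2.3,
}

def household_footprint_tco2(usage):
    """Annual footprint in metric tons of CO2 (tCO2): sum over energy
    sources of usage amount times its emission factor."""
    kg = sum(EMISSION_FACTORS[source] * amount for source, amount in usage.items())
    return kg / 1000.0
```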

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 285
678 The Power of Inferences and Assumptions: Using a Humanities Education Approach to Help Students Learn to Think Critically

Authors: Randall E. Osborne

Abstract:

A four-step ‘humanities’ thought model has been used in an interdisciplinary course for almost two decades and has been shown to aid students in becoming more inclusive in their world view. Because a lack of tolerance for ambiguity can interfere with this progression, we developed an assignment that appears to help students develop more tolerance for ambiguity and, therefore, to make more progress on the critical thought model. The four-step critical thought model (built from a humanities education approach) is used in an interdisciplinary course on prejudice, discrimination, and hate in an effort to minimize egocentrism and promote sociocentrism in college students. A fundamental barrier to this progression is a lack of tolerance for ambiguity. The approach to the course is built on the assumption that tolerance for ambiguity (characterized by a dislike of uncertain or ambiguous situations, or of situations in which expected behaviors are unclear) will serve either as a barrier (if tolerance is low) or a facilitator (if tolerance is high) of active ‘engagement’ with assignments. Given that active engagement with course assignments is necessary to promote an increase in critical thought and in the degree of multicultural attitude change, intolerance for ambiguity inhibits critical thinking and, ultimately, multicultural attitude change. As expected, students showing the least decrease (or even an increase) in intolerance across the semester earned lower grades in the course than students who showed a significant decrease in intolerance, t(1,19) = 4.659, p < .001. Students who demonstrated the most change in their tolerance for ambiguity (an increasing ability to tolerate ambiguity) earned the highest grades in the course. This is especially significant because faculty did not know students’ scores on this measure until after all assignments had been graded and course grades assigned.
An assignment designed to help students make their assumption- and inference-making processes visible, so that those processes could be examined, was implemented with the goal that this examination would promote greater tolerance for ambiguity, which, as outlined above, promotes critical thought. The assignment offers students two options and then requires them to explore what they have learned about inferences and/or assumptions. This presentation outlines the assignment, demonstrates the humanities model and what students learn from particular assignments, and shows how the assignment fosters a change in tolerance for ambiguity, which serves as the foundational component of critical thinking.

Keywords: critical thinking, humanities education, sociocentrism, tolerance for ambiguity

Procedia PDF Downloads 269
677 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to guide the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to add other sources in a unique and coherent model that permits the intersection of different data and opens new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of that information on cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising the ad hoc instrument that facilitates the analysis and intersection of data of different origins. Moreover, GIS has become the digital platform on which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for analysing urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation.
The data processing allows this information to be linked to an urban space and also visualises the growth of the city at that time. The sources related to urban landscape development in that period are of different natures. The emerging necessity to build, enlarge, modify, and join different buildings to boost industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which comprise reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data make it possible, firstly, to reconstruct the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) that show the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present: industrial heritage and people meet urban history.
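The core data linkage described above — joining a census record of a factory to a cadastral location via a shared plot identifier — is the kind of attribute join a GIS platform performs between an inventory table and a spatial layer. A minimal sketch, with invented field names and records:

```python
# Hypothetical join between a factory census table and a cadastral layer,
# keyed on a shared plot identifier. All records and field names here are
# invented for illustration; a real GIS would join a table to geometries.
factory_census = [
    {"plot_id": "B12-03", "activity": "metalworking", "owner": "N.N."},
]
cadastral_layer = {
    "B12-03": {"district": "Borgo Dora", "block": "B12", "coords": (45.08, 7.68)},
}

linked = [
    {**record, **cadastral_layer[record["plot_id"]]}
    for record in factory_census
    if record["plot_id"] in cadastral_layer
]
print(linked[0]["district"], linked[0]["activity"])
```

Once joined, each record carries both its archival attributes and a map location, so queries by owner, activity, or district can be visualised spatially.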

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 184
676 Applying Napoleoni's 'Shell-State' Concept to Jihadist Organisations' Rise in Mali, Nigeria and Syria/Iraq, 2011-2015

Authors: Francesco Saverio Angiò

Abstract:

The Islamic State of Iraq and the Levant/Syria (ISIL/S), Al-Qaeda in the Islamic Maghreb (AQIM), and the People Committed to the Propagation of the Prophet's Teachings and Jihad, also known as ‘Boko Haram’ (BH), have fought successfully against the governments of Syria and Iraq, Mali, and Nigeria, respectively. According to Napoleoni, the ‘shell-state’ concept can explain the economic dimension and the financing model of the ISIL insurgency. However, she argues that AQIM and BH did not properly plan their financial models, so her idea would not be suitable for these groups. Nevertheless, AQIM and BH's economic performance and their (short-lived) territorialisation suggest that their financing models respond to a well-defined strategy, which they were able to adapt to new circumstances. Therefore, Napoleoni's idea of the ‘shell-state’ can be applied to all three jihadist armed groups. In the last five years, together with other similar entities, ISIL/S, AQIM and BH have been fighting against governments with insurgent tactics and acts of terrorism, conquering and ruling quasi-states: physical spaces they presented as legitimate territorial entities, legitimised by a puritan version of Islamic law. In these territories, they have exploited traditional local economic networks. In addition, they have contributed to the development of legal and illegal transnational business activities. They have also established justice systems and created administrative structures to supply services. Napoleoni's ‘shell-state’ can describe the evolution of ISIL/S, AQIM and BH, which have switched from insurgencies to proto- or quasi-state entities enjoying a significant share of power over territories and populations. Napoleoni first developed and applied the ‘shell-state’ concept to describe the nature of groups such as the Palestine Liberation Organisation (PLO), before using it to explain the expansion of ISIL.
However, her original conceptualisation emphasises the economic dimension of the rise of an insurgency, focusing on the ‘business’ model and the insurgents' financial management skills, which permit them to turn into an organisation. Yet the idea of groups that use, coordinate and seize territorial economic activities (while encouraging new criminal ones) can also be applied to the administrative, social, infrastructural, legal and military levels of their insurgency, since these contribute to transforming the insurgency to the same extent the economic dimension does. In addition, in Napoleoni's view, the ‘shell-state’ prism is valid for understanding the ISIL/S phenomenon because the group has carefully planned its financial steps: Napoleoni affirmed that ISIL/S carries out activities in order to promote its conversion from a group relying on external sponsors to an entity that can penetrate and condition local economies. On this reading, the ‘shell-state’ could not be applied to AQIM or BH, which act more like smugglers. Nevertheless, despite failing to control territories as ISIL has been able to do, AQIM and BH have responded strategically to their economic circumstances and have established specific dynamics to ensure a stable flow of funds. Therefore, Napoleoni's theory is applicable to them as well.

Keywords: shell-state, jihadist insurgency, proto or quasi-state entity, economic planning, strategic financing

Procedia PDF Downloads 346
675 Tool Wear of Metal Matrix Composite with 10 wt% AlN Reinforcement Using a TiB2 Cutting Tool

Authors: M. S. Said, J. A. Ghani, C. H. Che Hassan, N. N. Wan, M. A. Selamat, R. Othman

Abstract:

Metal matrix composites (MMCs) have attracted considerable attention as a result of their ability to provide higher strength, modulus, toughness, impact properties, wear resistance and corrosion resistance than unreinforced alloys. Aluminium-silicon (Al/Si) alloy MMCs have been widely used in industrial sectors such as transportation, domestic equipment, aerospace, military, and construction. Aluminium-silicon alloy reinforced with aluminium nitride (AlN) particles is a new-generation material for automotive and aerospace applications. AlN is an advanced material combining light weight, high strength, and high hardness and stiffness, with good future prospects. However, the high degree of ceramic particle reinforcement and the irregular nature of the particles distributed in the matrix are the main causes of machining difficulties. This paper examines tool wear when milling an AlSi/AlN metal matrix composite using a TiB2-coated carbide cutting tool. The volume fraction of the AlN reinforcement particles was 10%. The milling process was carried out under dry cutting conditions. Three sets of cutting parameters were used with the TiB2-coated carbide insert: cutting speed 230 m/min, feed rate 0.4 mm/tooth, depth of cut (DOC) 0.5 mm; cutting speed 300 m/min, feed rate 0.8 mm/tooth, DOC 0.5 mm; and cutting speed 370 m/min, feed rate 0.8 mm/tooth, DOC 0.4 mm. A Sometech SV-35 video microscope system was used for the tool wear measurements. The results revealed that tool life increases with cutting speed: the highest speed (370 m/min, feed rate 0.8 mm/tooth, DOC 0.4 mm) constituted the optimum condition, with the longest tool life of 123.2 min. The medium cutting speed (300 m/min, feed rate 0.8 mm/tooth, DOC 0.5 mm) gave a tool life of 119.86 min, while the low cutting speed gave 119.66 min.
The high cutting speed thus gives the best parameters for cutting AlSi/AlN MMC materials. The results will help manufacturers machine AlSi/AlN MMC materials.
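The three reported conditions and their measured tool lives can be tabulated and compared directly; the figures are those stated in the abstract, while the data structure itself is just an illustrative sketch.

```python
# Tabulate the three reported cutting conditions (from the abstract) and
# select the optimum by maximum measured tool life.
results = [
    {"speed_m_min": 230, "feed_mm_tooth": 0.4, "doc_mm": 0.5, "tool_life_min": 119.66},
    {"speed_m_min": 300, "feed_mm_tooth": 0.8, "doc_mm": 0.5, "tool_life_min": 119.86},
    {"speed_m_min": 370, "feed_mm_tooth": 0.8, "doc_mm": 0.4, "tool_life_min": 123.2},
]

best = max(results, key=lambda r: r["tool_life_min"])
print(best["speed_m_min"], best["tool_life_min"])  # → 370 123.2
```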

Keywords: AlSi/AlN metal matrix composite, milling process, tool wear, TiB2-coated carbide tool, manufacturing engineering

Procedia PDF Downloads 421
674 Using GIS for Assessment and Modelling of Oil Spill Risk at Vulnerable Coastal Resources: Of Misratah Coast, Libya

Authors: Abduladim Maitieg

Abstract:

Oil production is one of the main productive activities in Libya and has a massive infrastructure, including offshore drilling and exploration sites and large oil export platforms located in the coastal area. The threat of oil spills to the marine and coastal environment is greatest at those sites, with additional spill pressure coming from urban and industrial sources; in parallel, oil spill monitoring and emergency risk strategies remain weak. An approach for estimating the vulnerability of coastal resources to oil spills is presented, based on their abundance, environmental and socio-economic importance, distance to oil spill sources, and oil spill risk likelihood. Ten coastal resources were selected for oil spill assessment along the coast. This study aims to evaluate and map vulnerable coastal resources and to estimate the rate of oil spills from different sources in the Misratah marine environment. In the study area there are two types of oil spill sources: major sources, namely the offshore oil industry located 96 km from the coast and the oil loading/unloading platform; and minor sources, namely urban sewage pipes and fishing ports. To analyse the collected database, geographic information system (GIS) software was used to identify oil spill locations, map oil tracks in front of the study area, and develop seasonal maps of vulnerable coastal resources. This work shows a differential distribution of the degree of vulnerability to oil spills along the coastline, with values ranging from high to low vulnerability, and highlights the link between oil spill movement and coastal resource vulnerability. The assessment found that most coastal freshwater spring sites, such as those on the Zreag coast, are highly vulnerable to oil spills due to their location in the intertidal zone and their close proximity to oil spill sources.
Furthermore, the saltmarsh coastline is highly vulnerable to oil spill risk because it contains nesting areas for sea turtles and feeding grounds for migratory birds. According to the modelled oil spill movement, oil would reach the coast in the winter season. Coastal tourist beaches on the north coast are also considered highly vulnerable due to their location and closeness to oil spill sources.

Keywords: coastal resources vulnerability, oil spill trajectory, GNOME software, Misratah coast, Libya, GIS

Procedia PDF Downloads 307
673 Human Health Risk Assessment from Metals Present in a Soil Contaminated by Crude Oil

Authors: M. A. Stoian, D. M. Cocarta, A. Badea

Abstract:

The main sources of soil pollution by petroleum contaminants are industrial processes involving crude oil. Soil polluted with crude oil is toxic to plants, animals, and humans. Human exposure to contaminated soil occurs through different exposure pathways: soil ingestion, diet, inhalation, and dermal contact. The present research focuses on soil contamination with heavy metals as a consequence of soil pollution with petroleum products. The human exposure pathways considered are accidental ingestion of contaminated soil and dermal contact. The purpose of the paper is to quantify the human health risk (carcinogenic risk) from soil contaminated with heavy metals. Human exposure and risk were evaluated for five contaminants of concern out of the eleven identified in the soil. Two soil samples were collected from a bioremediation platform in the Muntenia Region of Romania. The soil deposited on the bioremediation platform had been contaminated through oil extraction and processing. For the research work, two average soil samples from two different plots were analyzed: the first was slightly contaminated with petroleum products (total petroleum hydrocarbons (TPH) in soil of 1,420 mg/kg d.w.), while the second was highly contaminated (TPH in soil of 24,306 mg/kg d.w.). To evaluate the risks posed by heavy metals due to soil pollution with petroleum products, five metals known to be carcinogenic were investigated: arsenic (As), cadmium (Cd), chromium VI (CrVI), nickel (Ni), and lead (Pb). Chemical analysis of the contaminated soil samples showed the following heavy metal concentrations: As, 6.96 mg/kg d.w. in Site 1 and 11.62 mg/kg d.w. in Site 2; Cd, 0.9 mg/kg d.w. in Site 1 and 1 mg/kg d.w. in Site 2; CrVI, 0.1 mg/kg d.w. at both sites; Ni, 37.00 mg/kg d.w. in Site 1 and 42.46 mg/kg d.w. in Site 2; Pb, 34.67 mg/kg d.w. in Site 1 and 120.44 mg/kg d.w. in Site 2.
The concentrations of these metals exceed the normal values established in Romanian regulation but are smaller than the alert level for a less sensitive (industrial) use of soil. Although the concentrations do not exceed the thresholds, the next step was to assess the human health risk posed by soil contamination with these heavy metals. The resulting risks were compared with the acceptable level (10⁻⁶, according to the World Health Organization). As expected, the highest risk was identified for the soil with the higher degree of contamination: the individual risk (IR) was 1.11×10⁻⁵, compared with 8.61×10⁻⁶ for the less contaminated site.
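The final comparison reported above — each site's individual carcinogenic risk (IR) checked against the commonly used acceptable level of 10⁻⁶ — can be sketched as follows. The IR values are those stated in the abstract; the helper function is illustrative, not the authors' computation.

```python
# Compare reported individual carcinogenic risk (IR) per site against the
# acceptable risk threshold of 1e-6. IR values are from the abstract.
ACCEPTABLE_RISK = 1e-6

site_risk = {"site_1": 8.61e-6, "site_2": 1.11e-5}

def exceeds_acceptable(ir: float, threshold: float = ACCEPTABLE_RISK) -> bool:
    """True if the individual risk exceeds the acceptable level."""
    return ir > threshold

for site, ir in site_risk.items():
    print(site, exceeds_acceptable(ir))  # both sites exceed 1e-6
```

Both sites thus lie above the acceptable level, with the more contaminated site roughly an order of magnitude above it.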

Keywords: carcinogenic risk, heavy metals, human health risk assessment, soil pollution

Procedia PDF Downloads 419
672 Using Arts in ESL Classroom

Authors: Nazia Shehzad

Abstract:

Language and art can supplement and correlate with each other. Through the ages, art has been a means of visual expression used to convey a wide range of ideas. Art can take the perceiver into different times and different worlds. It can also be used to introduce different levels of vocabulary to learners of a second language. Learning a second language is, for most students, a difficult and strenuous experience. They are not only trying to accommodate to a new language but also trying to adjust to themselves and a new environment. They are anxious about almost everything, but they are especially self-conscious about their performance in the classroom. By relocating the focus from the student to an object, everyone participates, which relieves a certain degree of self-consciousness. The experience a student has with art in the classroom has to be gratifying for both the student and the teacher; if the atmosphere in the classroom is too grave, it will not serve any useful purpose. Art is an excellent way to teach English and to encourage collaboration and interaction between students of all ages. As making art involves many different processes, it is well suited to teaching classification and following or giving instructions. It is also an effective way to practise the language of characterization and comparison, and vocabulary acquisition for the elements of design (shape, size, color, texture, tone, etc.) is much more entertaining when done in a practical, hands-on way. Expressing ideas and feelings through art is also of immeasurable value when students are at the beginning stages of English language acquisition, and for many of my Saudi students it was a form of therapy. It is also a way to respect, explore, examine, and share the cultural traditions of different cultures, and of the students themselves.
Art not only keeps the otherwise aimless, meandering minds of students busy but is also a productive tool for analyzing the English language in a new way. For an ESL teacher, using art is a highly compelling way to bridge the gap between student and teacher. It is difficult to keep students concentrated, especially when they speak a different language, and to get students to actually learn and explore something in a foreign language lesson, artwork is your best friend. Many teachers feel that by integrating the arts into their academic lessons, students are able to learn more profoundly because they use diverse ways of thinking and problem solving. Teachers observe that drawing often retains students who might otherwise be dispassionate and can help students move beyond simple recall when they are asked to make connections and come up with an original interpretation through an artwork or drawing. Students use observation skills when they are drawing, and this can help to engage students who might otherwise remain silent or need more time to process information.

Keywords: amalgamation of arts, expressing ideas and feelings through arts, effective way to achieve and implement language, language and art can supplement and correlate each other

Procedia PDF Downloads 355
671 Methodical Approach for the Integration of a Digital Factory Twin into the Industry 4.0 Processes

Authors: R. Hellmuth

Abstract:

Current research on flexibility and adaptability in factory planning is oriented to the machine and process level; factory buildings themselves are not its focus. Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The adaptability of a factory can be divided into three types: spatial, organizational, and technical adaptability. Spatial adaptability indicates the ability to expand and reduce the size of a factory; here, the area-related breathing capacity plays the essential role, concerning mainly the factory site, the plant layout, and the production layout. Organizational adaptability enables the change and adaptation of organizational structures and processes, including the structural and process organization as well as logistical processes and principles. New and reconfigurable operating resources, processes, and factory buildings are referred to as technical adaptability. These three types of adaptability can be regarded independently of each other as undirected potentials of different characteristics. If there is a need for change, the types of changeability are combined in the change process to form a directed, complementary variable that makes change possible. When planning adaptability, a balance between the types of adaptability must be maintained. The vision of the intelligent factory building and the 'Internet of Things' presupposes the comprehensive digitalization of the spatial and technical environment. Through connectivity, the factory building must be empowered to support a company's value creation process by providing media such as light, electricity, heat, refrigeration, etc. In the future, communication with the surrounding factory building will take place on a digital or automated basis.
In Industry 4.0, the functions of the building envelope belong to secondary or even tertiary processes, but these processes must also be included in the communication cycle. An integrative view of continuous communication between primary, secondary, and tertiary processes is not yet available and is being developed methodically in this research work. A comparison of the digital twin from the point of view of production and from that of the factory building will be developed. Subsequently, a tool will be elaborated to classify digital twins from the perspective of data, degree of visualization, and the trades involved. This contributes to better integrating a factory's secondary and tertiary processes into its value creation.

Keywords: adaptability, digital factory twin, factory planning, industry 4.0

Procedia PDF Downloads 148
670 Mycophenolate-Induced Disseminated TB in a PPD-Negative Patient

Authors: Megan L. Srinivas

Abstract:

Individuals with underlying rheumatologic diseases such as dermatomyositis may not respond adequately to tuberculin (PPD) skin tests, creating false-negative results. These illnesses are frequently treated with immunosuppressive therapy, making proper identification of TB infection imperative. A 59-year-old Filipino man was diagnosed with dermatomyositis on the basis of rash, electromyography, and muscle biopsy. He was initially treated with IVIG infusions and transitioned to oral prednisone and mycophenolate. The patient's symptoms improved on this regimen. Six months after starting mycophenolate, the patient began having fevers, night sweats, and a productive cough without hemoptysis. He had moved from the Philippines 5 years prior to the dermatomyositis diagnosis, denied sick contacts, and was PPD negative both at immigration and immediately prior to starting mycophenolate treatment. A third PPD was negative following the onset of these new symptoms. He was treated for community-acquired pneumonia, but his symptoms worsened over 10 days, and he developed watery diarrhea and a growing non-tender, non-mobile mass on the left side of his neck. A chest x-ray demonstrated a cavitary lesion in the right upper lobe suspicious for TB that had not been present one month earlier. Chest CT corroborated this finding, also exhibiting necrotic hilar and paratracheal lymphadenopathy. Neck CT demonstrated the left-sided mass to be cervical chain lymphadenopathy. Expectorated sputum and stool samples contained acid-fast bacilli (AFB), and cultures grew TB bacteria. Fine-needle biopsy of the neck mass (scrofula) also exhibited AFB. Brain MRI showed nodular enhancement suspected to be a tuberculoma. Mycophenolate was discontinued, and dermatomyositis treatment was switched to oral prednisone with a 3-day course of IVIG. The patient's infection was sensitive to standard RIPE (rifampin, isoniazid, pyrazinamide, and ethambutol) treatment.
Within a week of starting RIPE, the patient's diarrhea subsided, the scrofula diminished, and his symptoms significantly improved. By the end of treatment week 3, the patient's sputum no longer contained AFB; he was removed from isolation and discharged to continue RIPE at home. He was discharged on oral prednisone, which effectively addressed his dermatomyositis. This case illustrates the unreliability of PPD tests in patients with long-term inflammatory diseases such as dermatomyositis. Other immunosuppressive therapies (adalimumab, etanercept, and infliximab) have been associated with conversion of latent TB to disseminated TB. Mycophenolate is another immunosuppressive agent with similar mechanistic properties. Thus, it is imperative that patients with long-term inflammatory diseases and high-risk TB factors who are initiating immunosuppressive therapy receive a TB blood test (such as a QuantiFERON Gold assay) before therapy begins, to ensure that latent TB is unmasked before it can evolve into the disseminated form of the disease.

Keywords: dermatomyositis, immunosuppressant medications, mycophenolate, disseminated tuberculosis

Procedia PDF Downloads 202
669 Family Medicine Residents in End-of-Life Care

Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal

Abstract:

Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in independently managing terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discuss medication effects and patient wishes after death, manage coping with pain, vomiting and constipation, and respond to limited patient decision-making capacity. Half of the respondents were confident in supporting the patient or a family member when they become upset. The majority expressed confidence in many end-of-life care skills provided that supervision, coaching, and consultation were available. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect residents' participation and confidence in caring for the dying.
Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency with sufficient supervision and role modeling may lead to knowledge and skill improvement to ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education.

Keywords: end of life care, geriatrics, palliative care, residency training skill

Procedia PDF Downloads 252
668 Dynamic Network Approach to Air Traffic Management

Authors: Catia S. A. Sima, K. Bousson

Abstract:

Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level but also by inducing arrival delays internationally. Hence, there is a need to monitor air traffic flow in TMAs appropriately so that efficient decisions may be taken to manage their occupancy rates. It would be desirable to physically enlarge the existing airspace to accommodate all existing demand, but this is utopian; instead, several studies and analyses have been developed over the past decades to meet the challenges arising from the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interests of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the consequent changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and tools already available in the aeronautical industry. The efficient use of these resources is crucial, as human working capacity is limited and the actors involved in air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address these issues is based on the advantages of Markov chain principles, which enable the construction of a simplified model of a dynamic network that describes air traffic flow behavior, anticipating changes and measures that could better address the impact of increased demand.
Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management, combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and specificities of the aeronautical industry, namely in the Portuguese context.
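The Markov-chain idea described above can be sketched minimally: TMA occupancy is modelled as discrete congestion states with a transition matrix estimated from historical traffic data, and the stationary distribution gives the long-run share of time spent in each state. The states and matrix below are invented for illustration, not data from the paper.

```python
# Minimal Markov-chain sketch of TMA congestion. P[i][j] is the (assumed)
# probability of moving from congestion state i to state j between
# observation intervals; rows sum to 1.
states = ["low", "medium", "high"]
P = [
    [0.7, 0.25, 0.05],
    [0.3, 0.5, 0.2],
    [0.1, 0.4, 0.5],
]

def steady_state(P, iterations=1000):
    """Power-iterate a uniform distribution to the stationary distribution."""
    dist = [1.0 / len(P)] * len(P)
    for _ in range(iterations):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

pi = steady_state(P)
print([round(x, 3) for x in pi])  # long-run share of time in each state
```

In practice, such a model would be fitted per TMA from historical occupancy records, and the stationary (or transient) distributions would support decisions on capacity and flow measures.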

Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains

Procedia PDF Downloads 127
667 A Focused, High-Intensity Spread-Spectrum Ultrasound Solution to Prevent Biofouling

Authors: Alan T. Sassler

Abstract:

Biofouling is a significant issue for ships, especially those based in warm-water ports. Biofouling damages hull coatings, degrades platform hydrodynamics, blocks cooling water intakes and returns, reduces platform range and speed, and increases fuel consumption. Although platforms are protected to some degree by antifouling paints, these paints are much less effective on stationary platforms, and problematic biofouling can occur on paint-protected stationary platforms in some environments in as little as a few weeks. Remediation hull cleaning operations are possible, but they are very expensive, sometimes damage the vessel's paint or hull, and are generally not completely effective. Ultrasound of sufficient intensity, focused on specific frequency ranges, can be used to prevent the growth of biofouling organisms. The use of ultrasound to prevent biofouling is not new, but systems to date have focused on protecting platforms by shaking the hull using internally mounted transducers similar to those used in ultrasonic cleaning machines. While potentially effective, this methodology does not scale well to large platforms, and the significant costs of installing and maintaining these systems dwarf the initial purchase price. An alternative approach has been developed that uses highly directional pier-mounted transducers to project high-intensity spread-spectrum ultrasonic energy into the water column, focused near the surface. This focused energy has been shown to prevent biofouling at ranges of up to 50 meters from the source. Spreading the energy over a multi-kilohertz band makes the system both more effective and more environmentally friendly. The system has proved both effective and inexpensive in small-scale testing and is now being characterized on a larger scale in selected marinas.
To date, test results have been collected in Florida marinas suggesting that this approach can be used to keep ensonified areas of thousands of square meters free from biofouling, although care must be taken to minimize shaded areas.

Keywords: biofouling, ultrasonic, environmentally friendly antifoulant, marine protection, antifouling

Procedia PDF Downloads 52
666 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study

Authors: Jakkireddy Sravani

Abstract:

Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized raised ICP, regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of a ketamine-propofol combination with propofol alone when used as the induction and maintenance anesthetic agent during supratentorial tumor surgery. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022, dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study and randomized into two groups: one group received a combination of ketamine and propofol, and the other received propofol alone for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and the quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet was used to code and record the data, and data analysis was done using IBM SPSS v23. The independent-sample t-test was applied for normally distributed continuous data when two groups were compared, the chi-square test for categorical data, and the Wilcoxon test for non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine-propofol group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364).
Conclusion: The ketamine-propofol combination for induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone.
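The two-group comparison described in the methodology can be sketched with synthetic data; the MAP values below are hypothetical and only illustrate the independent-sample t-test for a continuous outcome, not the study's actual measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical intraoperative mean arterial pressure readings (mmHg),
# one value per patient, for illustration only (n = 26 per group).
rng = np.random.default_rng(0)
map_ketofol = rng.normal(loc=85, scale=6, size=26)   # ketamine-propofol group
map_propofol = rng.normal(loc=75, scale=8, size=26)  # propofol-only group

# Independent-sample t-test, as described for normally distributed data
t_stat, p_value = stats.ttest_ind(map_ketofol, map_propofol)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With categorical outcomes (e.g. vasopressor use yes/no) the analogous call would be a chi-square test on the contingency table, and `stats.ranksums` or `stats.mannwhitneyu` would replace the t-test for non-normal data.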

Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol

Procedia PDF Downloads 12
665 Controlling RPV Embrittlement through Wet Annealing in Support of Life Extension

Authors: E. A. Krasikov

Abstract:

As the main barrier against the release of radioactivity, the reactor pressure vessel (RPV) is a key component in terms of NPP safety. Therefore, present-day demands for enhanced RPV reliability have to be met by all possible actions for the mitigation of RPV in-service embrittlement. Annealing treatment is known to be an effective measure to restore RPV metal properties deteriorated by neutron irradiation. There are two approaches to annealing. The first is so-called 'dry' high-temperature (~475°C) annealing. It allows practically complete recovery but requires the removal of the reactor core and internals, and an external heat source (furnace) is required to carry out the RPV heat treatment. The alternative approach is to anneal the RPV at the maximum coolant temperature that can be obtained using the reactor core or primary circuit pumps while operating within the RPV design limits. This low-temperature 'wet' annealing, although it cannot be expected to produce complete recovery, is more attractive from the practical point of view, especially in cases when the removal of the internals is impossible. The first RPV 'wet' annealing was done using nuclear heat (US Army SM-1A reactor); the second was done by means of primary-pump heat (Belgian BR-3 reactor). As a rule, there is no recovery effect until the annealing temperature exceeds the irradiation temperature by about 70°C. It is known, however, that along with radiation embrittlement, neutron irradiation may mitigate radiation damage in metals. Therefore, we have tried to test the possibility of using the effect of radiation-induced ductilization in 'wet' annealing technology, utilizing nuclear heat as both the heat source and the neutron irradiation source at once. In support of the above-mentioned conception, a 3-year reactor experiment on 15Cr3NiMoV-type steel was carried out, with preliminary irradiation in an operating PWR at 270°C and subsequent extra irradiation (87 h at 330°C) in the IR-8 test reactor.
In fact, embrittlement was partly suppressed, up to a value equivalent to a 1.5-fold decrease in neutron fluence. The degree of recovery in the case of radiation-enhanced annealing is equal to 27%, whereas furnace annealing results in zero effect under the same conditions. A mechanism for the radiation-induced damage mitigation is proposed. It is hoped that 'wet' annealing technology will help provide better management of RPV degradation as a factor affecting the lifetime of nuclear power plants, which, together with associated management methods, will help facilitate the safe and economic long-term operation of PWRs.

Keywords: controlling, embrittlement, radiation, steel, wet annealing

Procedia PDF Downloads 372
664 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death

Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh

Abstract:

All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth as well as the prognosis for future pregnancies and their outcomes. Determination of the time of death, meaning the period from the moment of death until the birth of the fetus, is an important issue addressed during examination of the body of a stillborn, and it is based on assessment of the severity of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and 15 bodies of newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before autopsy. The signal intensity of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) was determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and the time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4 cases, 12-24 hours; in 9 cases, 2-3 days; in 3 cases, 1 week; and in 2 cases, 1.5-2 weeks. In the 15 deceased newborns, signs of maceration were naturally absent. Based on the SIK, SIF, and SIA data from MR tomograms, we calculated the coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for those who died at gestational ages of 22-40 weeks with a time of death of not more than 1 week.
Maceration in antenatal fetal death is characterized by changes in T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows accurate determination of the time of intrauterine death to within one week for stillborns who died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for postmortem study of bodies, in particular those of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of their documentation, storage, and analysis after burial of the body.
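The quadratic formula reported in the abstract can be written directly as a function of the MR maceration coefficient M; the decimal commas and Cyrillic letters of the original have been normalized, and the example input value is purely illustrative.

```python
def mr_t(m: float) -> float:
    """Estimated time of intrauterine death (hours) from the MR maceration
    coefficient M, per the formula reported in the abstract:
    MR-t = 16.87 + 95.38*M^2 - 75.32*M
    (decimal commas in the original converted to points)."""
    return 16.87 + 95.38 * m**2 - 75.32 * m

# Evaluate the quadratic at an illustrative coefficient value M = 0.5
print(mr_t(0.5))  # -> 3.055 hours
```

Note that a quadratic of this shape is non-monotonic in M, so its clinical interpretation is only meaningful over the range of M actually observed in the study.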

Keywords: intrauterine death, maceration, postmortem MRI, stillborn

Procedia PDF Downloads 120
663 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Therefore, inflammatory markers have gained utmost importance in the evaluation of obesity and metabolic syndrome (MetS), a condition characterized by central obesity, elevated blood pressure, increased fasting blood glucose, and elevated triglycerides or reduced high-density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, it was questioned which inflammatory marker best captured the differences between various obesity groups. 514 pediatric individuals were recruited: 132 children with MetS, 155 morbid obese (MO), 90 obese (OB), 38 overweight (OW), and 99 children with normal BMI (N-BMI) were included in the scope of this study. Obesity groups were constituted using the age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBC was obtained using an automated hematology analyzer, and HDL-C analysis was performed. Using CBC parameters and HDL-C values, ratio markers of inflammation were calculated: the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR), and monocyte-to-HDL-C ratio (MHR). Statistical analyses were performed, with p < 0.05 considered statistically significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count, or NLR. PLR differed significantly between OW and N-BMI as well as MetS. MHR exhibited statistically significant differences between the MetS group and the N-BMI, OB, and MO groups. HDL-C values differed between MetS and the N-BMI, OW, OB, and MO groups.
MHR exhibited the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C was found to give much more valuable information; therefore, this parameter still keeps its value from the diagnostic point of view. Our results suggest that MHR can serve as an inflammatory marker in the evaluation of pediatric MetS, but its predictive value was not superior to that of HDL-C in the evaluation of obesity.
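The ratio markers named in the abstract are simple quotients of CBC values; the sketch below uses hypothetical inputs, and the dNLR form (neutrophils divided by the remaining white cells) is a common definition assumed here, not stated in the abstract.

```python
def cbc_ratios(neut, lymph, mono, platelets, wbc, hdl_c):
    """CBC-based inflammatory ratios as named in the abstract.
    Counts in 10^3/uL, HDL-C in mg/dL. The dNLR definition
    (neutrophils / (WBC - neutrophils)) is an assumption."""
    return {
        "NLR":  neut / lymph,          # neutrophil-to-lymphocyte ratio
        "dNLR": neut / (wbc - neut),   # derived NLR (assumed form)
        "PLR":  platelets / lymph,     # platelet-to-lymphocyte ratio
        "LMR":  lymph / mono,          # lymphocyte-to-monocyte ratio
        "MHR":  mono / hdl_c,          # monocyte-to-HDL-C ratio
    }

# Hypothetical pediatric values for illustration only
r = cbc_ratios(neut=4.2, lymph=2.1, mono=0.5, platelets=280, wbc=7.5, hdl_c=42)
print(r)
```
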

Keywords: children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity

Procedia PDF Downloads 122
662 Preparation of Biodegradable Methacrylic Nanoparticles by Semicontinuous Heterophase Polymerization for Drugs Loading: The Case of Acetylsalicylic Acid

Authors: J. Roberto Lopez, Hened Saade, Graciela Morales, Javier Enriquez, Raul G. Lopez

Abstract:

The implementation of nanostructure-based systems for drug delivery applications has gained relevance in recent studies focused on biomedical applications. Although several nanostructures can act as drug carriers, the use of polymeric nanoparticles (PNP) has been widely studied for this purpose; the main issue for these nanostructures, however, is size control below 50 nm with a narrow size distribution, since they must pass through different physiological barriers and avoid being filtered by the kidneys (< 10 nm) or the spleen (> 100 nm). Thus, considering these and other factors, drug-loaded nanostructures with sizes between 10 and 50 nm are preferred in the development and study of PNP/drug systems. In this sense, Semicontinuous Heterophase Polymerization (SHP) offers the possibility of obtaining PNP in the desired size range. Considering the above, methacrylic copolymer nanoparticles were obtained under SHP. The reactions were carried out in a jacketed glass reactor with the required quantities of water, ammonium persulfate as initiator, sodium dodecyl sulfate/sodium dioctyl sulfosuccinate as surfactants, and methyl methacrylate and methacrylic acid as monomers at a molar ratio of 2/1, respectively. The monomer solution was dosed dropwise during the reaction at 70°C with mechanical stirring at 650 rpm. Nanoparticles of poly(methyl methacrylate-co-methacrylic acid) were loaded with acetylsalicylic acid (ASA, aspirin) by a chemical adsorption technique: the purified latex was put in contact with a solution of ASA in dichloromethane (DCM) at 0.1, 0.2, 0.4, or 0.6 wt-% at 35°C for 12 hours. Given the boiling point of DCM and the densities of DCM and water, the loading process is complete when all the DCM has evaporated. The hydrodynamic diameter was measured after polymerization by quasi-elastic light scattering and transmission electron microscopy, before and after the loading procedures with ASA.
The quantitative and qualitative analyses of PNP loaded with ASA were performed by infrared spectroscopy, differential scanning calorimetry, and thermogravimetric analysis. The molar mass distributions of the polymers were also determined by gel permeation chromatography, and the loading capacity and efficiency were determined by gravimetric analysis. The hydrodynamic diameter results for the methacrylic PNP without ASA showed a narrow distribution with an average particle size of around 10 nm and a methyl methacrylate/methacrylic acid molar ratio equal to 2/1, the same composition as Eudragit S100, a commercial compound widely used as an excipient. Moreover, the latex was stabilized at a relatively high solids content (around 11%), with a monomer conversion of almost 95% and a number-average molecular weight of around 400 kg/mol. The average particle size in the PNP/aspirin systems fluctuated between 18 and 24 nm, depending on the initial percentage of aspirin in the loading process, with a drug content as high as 24% at a loading efficiency of 36%. Such small average sizes have not been reported in the literature; thus, the methacrylic nanoparticles reported here can be loaded with a considerable amount of ASA and used as a drug carrier.
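The gravimetric loading figures can be sketched with the standard definitions of loading capacity (drug mass over total loaded-particle mass) and loading efficiency (incorporated drug over drug fed); these definitions are assumed, not quoted from the paper, and the masses below are illustrative values chosen to reproduce the reported ~24% content and ~36% efficiency.

```python
def drug_loading(mass_drug_loaded, mass_nanoparticles, mass_drug_fed):
    """Gravimetric loading metrics under commonly used definitions
    (assumed here, not taken verbatim from the paper):
    capacity  = incorporated drug / (incorporated drug + particle mass)
    efficiency = incorporated drug / drug initially fed"""
    capacity = mass_drug_loaded / (mass_drug_loaded + mass_nanoparticles)
    efficiency = mass_drug_loaded / mass_drug_fed
    return capacity, efficiency

# Illustrative masses (mg) consistent with the reported figures
cap, eff = drug_loading(mass_drug_loaded=24.0,
                        mass_nanoparticles=76.0,
                        mass_drug_fed=66.7)
print(f"capacity = {cap:.0%}, efficiency = {eff:.0%}")
```
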

Keywords: aspirin, biocompatibility, biodegradable, Eudragit S100, methacrylic nanoparticles

Procedia PDF Downloads 132
661 Computed Tomography Myocardial Perfusion on a Patient with Hypertrophic Cardiomyopathy

Authors: Jitendra Pratap, Daphne Prybyszcuk, Luke Elliott, Arnold Ng

Abstract:

Introduction: Coronary CT angiography is a non-invasive imaging technique for the assessment of coronary artery disease with high sensitivity and negative predictive value. However, the correlation between the degree of CT coronary stenosis and the significance of hemodynamic obstruction is poor. The assessment of myocardial perfusion has mostly been undertaken by Nuclear Medicine (SPECT), but it is now possible to perform stress myocardial CT perfusion (CTP) scans quickly and effectively using CT scanners with high temporal resolution. Myocardial CTP is in many ways similar to the neuro-perfusion imaging technique: radiopaque iodinated contrast is injected intravenously, transits the pulmonary and cardiac structures, and then perfuses through the coronary arteries into the myocardium. On the Siemens Force CT scanner, a myocardial perfusion scan is performed using a dynamic axial acquisition in which the scanner shuttles in and out every 1-3 seconds (heart-rate dependent) to cover the heart in the z plane, usually over 38 seconds. Report: A CT myocardial perfusion scan can be used to complement the findings of a CT coronary angiogram. Implementing a CT myocardial perfusion study as part of a routine CT coronary angiogram procedure provides a 'One Stop Shop' for the diagnosis of coronary artery disease. This case study demonstrates that although the CT coronary angiogram was within normal limits, the perfusion scan provided additional, clinically significant information regarding the haemodynamics within the myocardium of a patient with Hypertrophic Obstructive Cardiomyopathy (HOCM). This negated the need for further diagnostic studies such as cardiac echocardiography or Nuclear Medicine stress tests. Conclusion: CT coronary angiography with adenosine stress myocardial CTP was used in this case specifically to exclude coronary artery disease while assessing perfusion within the hypertrophic myocardium.
Adenosine stress myocardial CTP demonstrated the reduced myocardial blood flow within the hypertrophic myocardium, but the coronary arteries did not show any obstructive disease. A CT coronary angiogram scan protocol that incorporates myocardial perfusion can provide diagnostic information on the haemodynamic significance of any coronary artery stenosis and has the potential to be a “One Stop Shop” for cardiac imaging.

Keywords: CT, cardiac, myocardium, perfusion

Procedia PDF Downloads 119