Search results for: shape error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3974

314 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has a potential application in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential applications of open-data-source satellite images (Sentinel-2 and Landsat 9) in estimating the biophysical properties of the wheat crop on a study farm located in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. Scatter plots with the coefficient of determination (R²) and Root Mean Square Error (RMSE), together with a correlation (r) matrix prepared in Python (e.g., using the pandas corr function and the seaborn heatmap function), were used to compare the performance of the Sentinel-2 and Landsat 9 VIs against the drone and the SpectraVue 710s spectrometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52) and that the soil texture is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plots and the correlation matrix showed that the Sentinel-2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone compared to Landsat 9. The Landsat 9 vegetation indices aligned somewhat better with the leaf spectrometer. Overall, Sentinel-2 showed better performance than Landsat 9. Further study with more extensive field spectral sampling and repeated UAV imaging is required to improve on the current results.
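
A comparison along these lines can be sketched in Python. The snippet below is a minimal illustration, not the authors' code: the vegetation-index values for the five sampling points are invented placeholders, and only the R²/RMSE computation and the correlation heatmap mirror the analysis described above.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical NDVI values at the five sampling points (illustrative only).
vis = pd.DataFrame({
    "sentinel2": [0.42, 0.55, 0.61, 0.48, 0.58],
    "landsat9":  [0.40, 0.52, 0.64, 0.45, 0.60],
    "uav":       [0.44, 0.56, 0.60, 0.50, 0.57],
    "leaf_spec": [0.41, 0.53, 0.63, 0.46, 0.59],
})

# Agreement of each satellite-derived VI with the UAV reference.
for sat in ["sentinel2", "landsat9"]:
    r2 = r2_score(vis["uav"], vis[sat])
    rmse = mean_squared_error(vis["uav"], vis[sat]) ** 0.5
    print(f"{sat}: R2 = {r2:.3f}, RMSE = {rmse:.3f}")

# Correlation (r) matrix and heatmap, as in the study.
sns.heatmap(vis.corr(), annot=True, cmap="viridis")
plt.show()
```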

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 79
313 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties. In the RIM, the log-odds of preferring statement A to statement B are defined as a competition between two parts: the sum of a person's latent trait on the construct that statement A measures and statement A's utility, versus the sum of the person's latent trait on the construct that statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is the within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., "my life is empty") many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interest survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, which was defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that, among the four methods, the last performed the best.
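
The RIM's log-odds structure described above can be written compactly. The following is a sketch based on the verbal description, with notation assumed rather than taken from the paper: θ_a and θ_b are the person's latent traits on the constructs measured by statements A and B, and δ_A, δ_B are the statement utilities.

```latex
% Log-odds of preferring statement A (trait a) to statement B (trait b):
\log \frac{P(A \succ B)}{P(B \succ A)}
  = (\theta_a + \delta_A) - (\theta_b + \delta_B)
```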

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 230
312 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era in which we are now living, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore, with climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 129
311 A Numerical Study for Improving the Performance of Vertical Axis Wind Turbines by a Wind Power Tower

Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam

Abstract:

Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates the output power and makes it difficult to predict the turbine's performance correctly. In order to improve the performance of a VAWT, a wind power tower can be applied. Usually, the wind power tower is constructed as a multi-story building to increase the frontal area exposed to the wind stream. Hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the wind power tower. Seven guide walls were used as struts between the floors of the wind power tower. These guide walls were utilized not only to increase the wind velocity within the wind power tower but also to adjust the wind direction to create a better working condition for the VAWT. Hence, some important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the wind power tower, and the direction of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum values of the design variables using computational fluid dynamics (CFD), which is a more accurate prediction method than stream-tube methods. To obtain accurate CFD results, transient analysis and full three-dimensional (3-D) computation are needed. However, full 3-D CFD is hard to use as a practical tool because it requires huge computation time. Therefore, a reduced computational domain was applied as a practical method. In this study, the computations were conducted in the reduced computational domain and compared with experimental results from the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method could be effective within a design methodology using an optimization algorithm. After validation of the numerical method, CFD simulations of the wind power tower were conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without the wind power tower. In addition, the increase in output power depended greatly on the dimensions of the guide wall.
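
The leverage of the guide walls can be seen from the standard wind power relation (a textbook formula, not stated in the abstract): the available power grows with the cube of the wind speed, so even a modest velocity increase inside the tower raises the VAWT's output considerably.

```latex
% rho: air density, A: swept area, C_P: power coefficient, V: wind speed
P = \frac{1}{2}\,\rho\,A\,C_P\,V^3
```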

Keywords: CFD, performance, VAWT, wind power tower

Procedia PDF Downloads 360
310 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes – 3D printing. These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design, and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to establish a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, the functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem, as sketched below. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model at each of the points within the cloud; the evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed on the basis of a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
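
The workspace-based synthesis can be illustrated with a toy genetic algorithm in Python. The sketch below makes simplifying assumptions not taken from the paper: a linear delta with three vertical rails at radius R, arms of length L, effector joints at radius r, hypothetical dimension bounds, and a plain elitist GA. It only demonstrates the point-cloud feasibility check driving the evolution.

```python
import math
import random

# Cylindrical workspace sampled as a point cloud (radius 0.1 m, height 0.2 m).
def workspace_cloud(n=300, radius=0.10, height=0.20, seed=1):
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-radius, radius), rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            pts.append((x, y, rng.uniform(0.0, height)))
    return pts

# Inverse kinematics of a simplified linear delta: three vertical rails at
# radius R (120 degrees apart), arms of length L, effector joints at radius r.
# A point is reachable if every carriage height is real and within the travel.
def reachable(point, R, L, r, travel=0.5):
    x, y, z = point
    for k in range(3):
        a = 2.0 * math.pi * k / 3.0
        dx = x + r * math.cos(a) - R * math.cos(a)
        dy = y + r * math.sin(a) - R * math.sin(a)
        s = L * L - dx * dx - dy * dy
        if s < 0.0:
            return False
        zc = z + math.sqrt(s)          # carriage height on the rail
        if not 0.0 <= zc <= travel:
            return False
    return True

# Fitness: full workspace coverage first, then smaller bodies (R + L + r).
def fitness(params, cloud):
    R, L, r = params
    cover = sum(reachable(p, R, L, r) for p in cloud) / len(cloud)
    return cover - 0.1 * (R + L + r)

def evolve(cloud, pop_size=40, gens=60, seed=2):
    rng = random.Random(seed)
    bounds = [(0.05, 0.40), (0.10, 0.60), (0.01, 0.10)]  # R, L, r in metres
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, cloud), reverse=True)
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            mom, dad = rng.sample(elite, 2)
            child = [(m + d) / 2.0 for m, d in zip(mom, dad)]  # crossover
            for i, (lo, hi) in enumerate(bounds):              # mutation
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.01)))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: fitness(ind, cloud))

best = evolve(workspace_cloud())
print("R=%.3f m, L=%.3f m, r=%.3f m" % tuple(best))
```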

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 166
309 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell locations and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using grayscale image conversion and binary thresholding, which allow computer vision to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer-vision-based analysis of calcium imaging recordings of neuronal cell bodies in neuronal cell cultures. Our goal now is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
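
The contour-detection stage described above can be sketched with OpenCV as follows. This is an illustrative reconstruction, not the authors' scripts: the file name, the Otsu threshold, and the minimum-area cutoff are assumptions.

```python
import cv2
import numpy as np

# Load one frame of a calcium imaging recording (path is hypothetical).
frame = cv2.imread("calcium_frame.tif", cv2.IMREAD_GRAYSCALE)

# Binary thresholding separates bright cell bodies from the background.
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Detect neuron contours on the binary image.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Mean fluorescence within each detected contour.
for i, cnt in enumerate(contours):
    if cv2.contourArea(cnt) < 20:        # skip non-cell specks
        continue
    mask = np.zeros_like(frame)
    cv2.drawContours(mask, [cnt], -1, 255, thickness=cv2.FILLED)
    mean_f = cv2.mean(frame, mask=mask)[0]
    print(f"cell {i}: area={cv2.contourArea(cnt):.0f} px, mean F={mean_f:.1f}")
```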

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 60
308 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most of the applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, since DFB lasers are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of the first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult because of the narrow line width and the high-quality nitride overgrowth required on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to carry out the overgrowth of a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios is designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process step by step to study the growth mode for nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth model is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of the Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to the realization of GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating patterned substrates at wafer scale; moreover, the growth dynamics have been analyzed.
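
The sub-100 nm period follows from the first-order Bragg condition. As a quick check, assuming an effective refractive index of about 2.4 for a GaN waveguide near 400 nm (a typical value, not given in the abstract):

```latex
\Lambda = \frac{\lambda}{2\,n_{\mathrm{eff}}}
        \approx \frac{400\ \mathrm{nm}}{2 \times 2.4}
        \approx 83\ \mathrm{nm} \; (< 100\ \mathrm{nm})
```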

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 157
307 Petrology of the Post-Collisional Dolerites, Basalts from the Javakheti Highland, South Georgia

Authors: Bezhan Tutberidze

Abstract:

The Neogene-Quaternary volcanic rocks of the Javakheti Highland are products of post-collisional continental magmatism and are related to the divergent and convergent margins of the Eurasian and Afro-Arabian lithospheric plates. The studied area constitutes an integral part of the volcanic province of central South Georgia. Three cycles of volcanic activity are identified here: 1. Late Miocene-Early Pliocene, 2. Late Pliocene-Early /Middle/ Pleistocene, and 3. Late Pleistocene. Intense basic dolerite magmatic activity occurred within the Late Pliocene and lasted until at least the Late /Middle/ Pleistocene. The age of the volcanogenic and volcanogenic-sedimentary formation was dated by geomorphological, paleomagnetic, paleontological, and geochronological methods /1.7-1.9 Ma/. The volcanic area of the Javakheti Highland contains multiple dolerite plateaus: Akhalkalaki, Gomarethi, Dmanisi, and Tsalka. Petrographic observations of these doleritic rocks reveal a fairly constant mineralogical composition: olivine /Fo₈₇.₆₋₈₂.₇/, plagioclase /Ab₂₂.₈ An₇₅.₉ Or₁.₃; Ab₄₅.₀₋₃₂.₃ An₅₂.₉₋₆₂.₃ Or₂.₁₋₅.₄/. The pyroxene is augite and may exhibit visible zoning /Wo₃₉.₇₋₄₃.₁ En₄₃.₅₋₄₅.₂ Fs₁₆.₈₋₁₁.₇/. Opaque minerals /magnetite, titanomagnetite/ are abundant as inclusions within olivine and pyroxene crystals. The texture of the dolerites ranges from intergranular and holocrystalline to ophitic and sub-ophitic granular. The dolerites are mostly vesicular rocks. Vesicles range in shape from spherical to elongated and in size from 0.5 mm to 1.5-2 cm, and they make up about 20-50% of the volume. The dolerites have been subjected to considerable alteration. The secondary minerals in the geothermal field are zeolite, calcite, chlorite, aragonite, clay-like minerals /dominated by smectites/, and an iddingsite-like mineral; rare quartz and pumpellyite are present. The vesicles are filled by these secondary minerals. Chemically, the dolerites are calc-alkaline, transitional to sub-alkaline, with a predominance of Na₂O over K₂O. Chemical analyses indicate that the dolerites of all plateaus of the Javakheti Highland have similar geochemical compositions, signifying that they were formed from the same magmatic source by crystallization of a less differentiated olivine basaltic magma /⁸⁷Sr/⁸⁶Sr 0.703920-0.704195/. There is one argument, which is less convincing, according to which the dolerites/basalts of the Javakheti Highland are considered to be the product of mantle plume activity. Unfortunately, no reliable evidence exists to prove this. The petrochemical peculiarities and the eruption style of the dolerites of the Javakheti Plateau point against a plume origin. Nevertheless, it is not excluded that a plume influenced the formation of the dolerite-producing primary basaltic magma.

Keywords: calc-alkalic, dolerite, Georgia, Javakheti Highland

Procedia PDF Downloads 241
306 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials

Authors: Melanie Kerschner

Abstract:

Language is our main means of social interaction. News journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants, constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e., the news event). At the same time, he/she claims authority over the “correct” description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e., the journalist, the reader, and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian, and German editorials. For this purpose, the presenter will propose a basic framework: the so-called “pyramid of discourse participants”, comprising the author, the reader, two types of news actors, and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed: • Which strategies does the author employ to persuade the reader and to prompt him to give his opinion (in the comment section)? • In which ways (and with which linguistic tools) is editorial opinion expressed? • Does the author use adjectives, adverbials, and modal verbs to evaluate news actors, their actions, and the current state of affairs, or does he/she prefer nominal labels? • Which influence do language choice and the related media culture have on the representation of news events in editorials? • To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable? The culture-contrastive study will examine 45 editorials (i.e., 15 per media culture) from six national quality papers that are similar in distribution, importance, and the kind of envisaged readership, in order to draw valuable conclusions about culturally motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials is the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the “pyramid of discourse participants” as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus will furthermore enhance the understanding.

Keywords: micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation

Procedia PDF Downloads 482
305 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is the fact that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises in that respect is as follows: How could language be taught for the first time by a non-speaker, i.e., by someone who had not had the opportunity to master it as a child? Yet the above paradox appears less unresolvable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique used to develop the learners’ mimetic skills. Its communicative and expressive properties could have been discovered and exploited later, upon the learners’ reaching adulthood. The importance of mimesis in children’s development is universally recognised. The most common forms of it are onomatopoeia and mime, which consist in reproducing the sounds and imitating the shapes/movements of externally observed objects. However, in some cases, neither of these exercises is adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproducible shape, while its movements may depend on the specific way we interact with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue, which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject’s relatively slow approach to the object or vice versa, while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing. From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue but also as a mnemonic sequence that contains encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small the community might be at first), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is outside the scope of this paper.) In its very beginning, this word has two major applications. It can be used for interhuman communication because it allows us to invoke the presence of a currently absent object. It can also be used for designing, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental (‘spiritual’) dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism, arguably the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 52
304 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes, mainly in the food industry, is the double-pipe heat exchanger (DPHx). To improve the heat transfer performance, several passive geometrical devices can be used, such as the wall corrugation of tubes, which increases the wetted perimeter while maintaining a constant cross-section area, consequently increasing the convective surface area. This contributes to enhanced heat transfer in forced convection by promoting secondary recirculating flows. One of the most widespread tools for analysing heat exchangers' efficiency is the use of computational fluid dynamics (CFD) techniques, a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, a smooth tube and a spirally corrugated tube, has been analysed. Hence, an experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v17.0 were carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes in turbulent flow. To validate the numerical results, an experimental setup was used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests were carried out for the smooth tube and two for the corrugated one. In all the tests, the hot fluid had a constant flowrate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flowrate was 25 l/min (Test 1) or 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes are made of stainless steel, with an external diameter of 24 mm, a thickness of 1 mm, and a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of the corrugation height to the diameter (H/D), the non-dimensional helical pitch (P/D), and the severity index (SI = H²/(P × D)). The results showed good agreement between the numerical and experimental results; the smallest differences were found for the fluid temperatures. In all the analysed tests and for both tubes, the temperatures obtained numerically were slightly higher than the experimental results, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
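
With the stated dimensions (H = 1.1 mm, P = 25 mm, and taking D as the inner tube's external diameter of 24 mm, an assumption since the abstract does not name D explicitly), the three non-dimensional parameters work out roughly as:

```latex
\frac{H}{D} = \frac{1.1}{24} \approx 0.046, \qquad
\frac{P}{D} = \frac{25}{24} \approx 1.04, \qquad
SI = \frac{H^2}{P\,D} = \frac{1.1^2}{25 \times 24} \approx 2.0 \times 10^{-3}
```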

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 122
303 Teen Insights into Drugs, Alcohol, and Nicotine: A National Survey of Adolescent Attitudes toward Addictive Substances

Authors: Linda Richter

Abstract:

Background and Significance: The influence of parents on their children’s attitudes and behaviors is immense, even as children grow out of what one might assume to be their most impressionable years and into their teens. This study specifically examines the potential that parents have to prevent or reduce the risk of adolescent substance use, even in the face of considerable environmental influences to use nicotine, alcohol, or drugs. Methodology: The findings presented are based on a nationally representative survey of 1,014 teens aged 12-17 living in the United States. Data were collected using an online platform in early 2018. About half the sample was female (51%), 49% were aged 12-14, and 51% were aged 15-17. The margin of error was +/- 3.5%. Demographic data on the teens and their families were available through the survey platform. Survey items explored adolescent respondents’ exposure to addictive substances; the extent to which their sources of information about these substances are reliable or credible; friends’ and peers’ substance use; their own intentions to try substances in the future; and their relationship with their parents. Key Findings: Exposure to nicotine, alcohol, or other drugs, and misinformation about these substances, were associated with a greater likelihood that adolescents have friends who use drugs and that they intend to try substances in the future, both of which are known to directly predict actual teen substance use. In addition, teens who reported a positive relationship with their parents and having parents who are involved in their lives had a lower likelihood of having friends who use drugs and of intending to try substances in the future. This relationship appears to be mediated by parents’ ability to reduce the extent to which their children are exposed to substances in their environment and to misinformation about them. Indeed, the findings indicated that teens who reported a good relationship with their parents, and those who reported higher levels of parental monitoring, had significantly higher odds of reporting a lower number of risk factors than teens with a less positive relationship with their parents or less monitoring. There were also significantly more risk factors associated with substance use among older teens relative to younger teens. This shift appears to coincide directly with the tendency of parents to pull back in their monitoring of, and their involvement in, their adolescent children’s lives. Conclusion: The survey findings underscore the importance of resisting the urge to pull back completely as teens age and demand more independence, since that is exactly when the risks for teen substance use spike and young people need their parents and other trusted adults to be involved more than ever. Particularly through the cultivation of a healthy, positive, and open relationship, parents can help teens receive accurate and credible information about substance use and also monitor their whereabouts and exposure to addictive substances. These findings, which come directly from teens themselves, demonstrate the importance of continued parental engagement throughout children’s lives, regardless of their age and of the disincentives to remaining involved and connected.

Keywords: adolescent, parental monitoring, prevention, substance use

Procedia PDF Downloads 114
302 Sol-Gel Derived Yttria-Stabilized Zirconia Nanoparticles for Dental Applications: Synthesis and Characterization

Authors: Anastasia Beketova, Emmanouil-George C. Tzanakakis, Ioannis G. Tzoutzas, Eleana Kontonasaki

Abstract:

In restorative dentistry, yttria-stabilized zirconia (YSZ) nanoparticles can be applied as fillers to improve the mechanical properties of various resin-based materials. Using sol-gel-based synthesis, a simple and cost-effective method, nano-sized YSZ particles with high purity can be produced. The aim of this study was to synthesize YSZ nanoparticles by the Pechini sol-gel method at different temperatures and to investigate their composition, structure, and morphology. YSZ nanopowders were synthesized by the sol-gel method using zirconium oxychloride octahydrate (ZrOCl₂.8H₂O) and yttrium nitrate hexahydrate (Y(NO₃)₃.6H₂O) as precursors, with the addition of acid chelating agents to control the hydrolysis and gelation reactions. The obtained powders underwent TG-DTA analysis and were sintered at three different temperatures, 800, 1000, and 1200°C, for 2 hours. Their composition and morphology were investigated by Fourier Transform Infrared Spectroscopy (FTIR), X-Ray Diffraction Analysis (XRD), Scanning Electron Microscopy with an associated Energy Dispersive X-ray analyzer (SEM-EDX), Transmission Electron Microscopy (TEM), and Dynamic Light Scattering (DLS). FTIR and XRD analysis showed the presence of a pure tetragonal phase in the nanopowders. With increasing calcination temperature, the crystallite size increased, reaching 47.2 nm for the YSZ1200 specimens. SEM analysis at high magnifications and DLS analysis showed submicron-sized particles with good dispersion and low agglomeration, which increased in size as the sintering temperature was elevated. The TEM images of the YSZ1000 specimen show that the zirconia nanoparticles are uniform in size and shape, with an average particle size of about 50 nm. The electron diffraction patterns clearly revealed the ring patterns of a polycrystalline tetragonal zirconia phase. Pure YSZ nanopowders have thus been successfully synthesized by the sol-gel method at different temperatures. Their size is small and uniform, allowing their incorporation into dental luting resin cements to improve the mechanical properties and possibly enhance the bond strength of demanding dental ceramics, such as zirconia, to the tooth structure. This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme 'Human Resources Development, Education and Lifelong Learning 2014-2020' in the context of the project 'Development of zirconia adhesion cements with stabilized zirconia nanoparticles: physicochemical properties and bond strength under aging conditions' (MIS 5047876).
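
Crystallite sizes such as the 47.2 nm value above are commonly estimated from XRD peak broadening via the Scherrer equation; the abstract does not state the method used, so this is offered only as the usual route:

```latex
% Scherrer equation: D = crystallite size, K ~ 0.9 (shape factor),
% lambda = X-ray wavelength, beta = peak FWHM in radians, theta = Bragg angle.
D = \frac{K\,\lambda}{\beta \cos\theta}
```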

Keywords: dental cements, nanoparticles, sol-gel, yttria-stabilized zirconia, YSZ

Procedia PDF Downloads 119
301 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process that uses various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals by vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude-oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude-oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). The optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the laboratory experimental results. This study also investigated the application of two machine learning models, the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and the mean squared error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques applying the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71. The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between the experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between the experimental and predicted findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost; it is therefore recommended to use such models for this purpose.
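
An ANN of the kind evaluated here can be prototyped in a few lines with scikit-learn. The snippet below is an illustrative sketch, not the study's model: the tiny synthetic dataset and the network size are assumptions; only the factor list and the R²/MSE evaluation mirror the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Washing factors: [dosage (g), concentration (g/ml), time (min), pH].
# Values below are invented placeholders, not experimental data.
X = np.array([
    [10, 20, 10, 6.0], [20, 40, 15, 6.5], [30, 60, 20, 7.0],
    [34.5, 69.1, 26, 7.7], [40, 80, 30, 8.0], [15, 30, 12, 6.2],
    [25, 50, 18, 6.8], [35, 70, 25, 7.5],
])
y = np.array([35, 55, 80, 98.9, 90, 45, 70, 95])  # removal efficiency (%)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

pred = model.predict(X)
print("R2 =", r2_score(y, pred))
print("MSE =", mean_squared_error(y, pred))
```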

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 76
300 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governments' policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. The hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains. This became possible because of advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the ideas go viral, and a final phase when the ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas are different once the ideas are stored in the occipital lobe. The human brain is incapable of further evaluating ideas once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.

Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam

Procedia PDF Downloads 28
299 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (⁶⁸Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at an interval of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) yielded an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both the phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β = 500), resulting in the highest SNR and lowest SBR, due to the greater reduction in noise than in SUV at the highest β-value. In the comparison of BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked above OSEM in all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to better quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
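
For reference, the relative difference penalty applied by BSREM is commonly written as below (a standard formulation from the literature; the abstract itself does not spell it out). Here f_j is the value of voxel j, N_j its neighborhood, γ an edge-preservation parameter, and the β-value studied above scales the whole term in the objective:

```latex
% Relative difference penalty (RDP); beta multiplies R(f) in the objective.
R(f) = \sum_{j} \sum_{k \in N_j}
       \frac{(f_j - f_k)^2}{f_j + f_k + \gamma\,\lvert f_j - f_k \rvert}
```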

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 71
298 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation, and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity, and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveying has shown inconsistencies in distress classification and measurement, which can potentially impact the planning of pavement maintenance, rehabilitation, and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify, and quantify pavement distresses through processes that require no or very minimal human intervention; this principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity, and severity of the surface distresses, and the outputs can be used to quantitatively calculate the crack density. The systems for automated distress surveys using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and predicting pavement performance, and it can help airport owners determine the type of pavement maintenance and rehabilitation in a more consistent way.
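
As a simple illustration of how a crack density index might be computed from automated survey output, the sketch below takes a binary crack mask (crack pixels bright) produced by a digital recognition step and reports cracked area per unit surveyed area. The definition used here (cracked-pixel ratio) and the file path and ground resolution are assumptions, since no standard protocol is established.

```python
import cv2
import numpy as np

# Binary crack mask from an automated distress-recognition step
# (path and mask convention are hypothetical).
mask = cv2.imread("crack_mask.png", cv2.IMREAD_GRAYSCALE)

crack_pixels = np.count_nonzero(mask > 127)
total_pixels = mask.size

# Crack density as the fraction of the surveyed surface that is cracked.
crack_density = crack_pixels / total_pixels
print(f"crack density: {100 * crack_density:.2f}% of surveyed area")

# With a known ground resolution (metres per pixel, assumed value),
# the same count converts to a cracked area in square metres.
m_per_px = 0.005
cracked_area_m2 = crack_pixels * m_per_px ** 2
print(f"cracked area: {cracked_area_m2:.2f} m2")
```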

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 167
297 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements

Authors: Dragan Ribarić

Abstract:

We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters, and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters, and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first one is modelled with one displacement and two rotation degrees of freedom in each of the four element nodes, and the second element has five additional internal degrees of freedom, which give polynomial completeness of the cubic form and can be statically condensed within the element. Both elements are able to pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to rigid-body motions are zero. There are no additional spurious zero-energy modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to other known elements that also use the quadratic or the cubic linked interpolation, by testing them on several benchmark examples. A simple functional upgrade from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement compared to the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms are incorporated in the displacement and rotation function fields, completing the full cubic linked interpolation form, is a qualitative improvement achieved in the Q4-U3R5 element. Nevertheless, the locking problem exists even for both presented elements, as in all pure displacement elements applied to very thin plates modelled by coarse meshes. But good, and even slightly better, performance can be noticed for the Q4-U3R5 element when compared with elements from the literature, provided the model meshes are moderately dense and the plate thickness is not extremely thin. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modelling very skew plates, models with singularities in the stress fields, and circular plates with distorted meshes.

Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements

Procedia PDF Downloads 287
296 A Multi-Perspective, Qualitative Study into Quality of Life for Elderly People Living at Home and the Challenges for Professional Services in the Netherlands

Authors: Hennie Boeije, Renate Verkaik, Joke Korevaar

Abstract:

In Dutch national policy, it is promoted that the elderly remain living at home longer; they are less often admitted to a nursing home, or only later in life. While living at home, it is important that they experience a good quality of life, and care providers in primary care support this. In this study, we investigated what quality of life means for the elderly and which characteristics care should have to support living at home longer with quality of life. To explore this topic, a qualitative methodology was used. Four focus groups were conducted: two with elderly people who live at home and their family caregivers, one with district nurses employed in home care services, and one with elderly care physicians working in primary care. In addition, individual interviews were conducted with general practitioners (GPs). In total, 32 participants took part in the study. The data were thematically analysed with MaxQDA software for qualitative analysis and reported. Quality of life is a multi-faceted term for the elderly. The essence of their description is that they can still undertake activities that matter to them. Good physical health, mental well-being and social connections enable them to do this, and control over their own life is important for some. They are of the opinion that how they experience life and manage old age is related to their resilience and coping. Key terms in GPs' definitions of quality of life are likewise physical and mental health and social contacts; these are the three pillars. In addition, elderly care physicians mention security and safety, and district nurses add control over one's own life and meaningful daily activities. They agree that with frail elderly people the balance is delicate, and a change in one of the three pillars can cause it to collapse like a house of cards. When discussing what support is needed, professionals agree on access to care with a low threshold, prevention, and life course planning. When care is provided in a timely manner, a worsening of the situation can be prevented. They agree that hospital care is often not needed, since most of the problems of the elderly have to do with care and security rather than with cure per se. GPs can consult elderly care physicians to lower their workload and to bring in specific knowledge. District nurses often signal changes in the situation of the elderly. According to them, the elderly predominantly need someone to watch over them and provide them with a feeling of security. Life course planning and advance care planning can contribute to uniform treatment in line with older adults' wishes. In conclusion, all stakeholders, including elderly persons, agree on what quality of life entails and on the quality of care needed to support it. A future challenge is to shape conditions for the right skill mix of professionals, cooperation between the professions, and the breaking down of differences in financing and supply. For the elderly, the challenge is preparing for aging.

Keywords: elderly living at home, quality of life, quality of care, professional cooperation, life course planning, advance care planning

Procedia PDF Downloads 109
295 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing

Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque

Abstract:

Hydrogels produced with polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study on polymer nanocomposites containing silver nanoparticles shows antimicrobial activity and applications in wound healing. The effects are linked to the slow oxidation of silver and the liberation of Ag⁺ into the biological environment. Furthermore, bacterial cell membrane penetration and metabolic disruption through cell cycle disarrangement also contribute to microbial cell death. The antimicrobial activity of silver has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles for wound healing, with antimicrobial properties in cutaneous lesions. The hydrogel development utilized different sodium alginate and gelatin proportions (20:80, 50:50 and 80:20). The incorporation of silver nanoparticles was evaluated at concentrations of 1.0, 2.0 and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was made using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, together with a minimum inhibitory concentration (MIC) assessment and an in vivo cicatrizant test. The UV-Vis results suggested that sodium alginate and gelatin in the 80:20 proportion with 4 mM AgNO₃ yielded the best hydrogel formulation. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal shape. The TG curve exhibited two weight loss events, and DSC indicated one endothermic peak at 230-250 °C due to sample fusion. The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The L929 human fibroblast viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33%. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at 1.06 mg·mL⁻¹ against Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ against Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area: on the seventh day, the hydrogel-nanoparticle formulation had reduced the total area of injury by 81.14%, while the control reached a 45.66% reduction. The results suggest that the silver-hydrogel nanoformulation exhibits potential for wound dressing therapeutics.

Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle

Procedia PDF Downloads 81
294 Organ Donation after Medical Aid in Dying: A Critical Study of Clinical Processes and Legal Rules in Place

Authors: Louise Bernier

Abstract:

Under some jurisdictions (including Canada), eligible patients can request and receive medical assistance in dying (MAiD) through lethal injections, inducing their cardiocirculatory death. Those same patients can also wish to donate their organs in the process. If they qualify as organ donors, a clinical and ethical rule called the 'dead donor rule' (DDR) requires transplant teams to wait until cardiocirculatory death is confirmed, followed by a 'no touch' period (5 minutes in Canada), before they can proceed with organ removal. The medical procedures (lethal injections) as well as the delays associated with the DDR can damage organs (mostly thoracic organs) due to prolonged anoxia. Yet, strong scientific evidence demonstrates that operating differently and reconsidering the DDR would result in more organs of better quality being available for transplant. This idea generates discomfort and resistance, but it is also worth considering, especially in a context of chronic shortage of available organs. One option that could be examined for MAiD patients who wish to be, and can be, organ donors would be to remove vital organs while the patients are still alive (and under sedation). This would imply accepting that the patient's death would occur through organ donation instead of the lethal injection required under MAiD legal rules. It would also mean that patients requesting MAiD and wishing to be organ donors could aspire to donate better quality organs, including their heart, an altruistic gesture that carries important symbolic value for many donors and their families. Following a patient-centered approach, our hypothesis is that preventing vital organ donation from a living donor in all circumstances is neither perfectly coherent with how legal mentalities have evolved lately in the field of fundamental rights nor compatible with the clinical and ethical frameworks that shape the landscape in which those complex medical decisions unfold. Through a study of the legal, ethical, and clinical rules in place, both at the national and international levels, this analysis raises questions about the numerous inconsistencies associated with respecting the DDR for patients who have chosen to die through MAiD. We will begin with an assessment of the erosion of certain national legal frameworks that pertain to the sacred nature of the right to life, which now also includes the right to choose how one wishes to die. We will then study recent innovative clinical protocols tested in different countries to help address acute organ shortage problems in creative ways. We will conclude this analysis with an ethical assessment of the situation, referring to principles such as justice, autonomy, altruism, beneficence, and non-malfeasance. This study will build a strong argument in favor of starting to allow vital organ donations from living donors in countries where MAiD is already permitted.

Keywords: altruism, autonomy, dead donor rule, medical assistance in dying, non-malfeasance, organ donation

Procedia PDF Downloads 153
293 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, China Flux) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of a single algorithm and therefore providing smaller errors and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, the XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over the XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ for the ensemble, the XGB, and the best FFNN, respectively. The most significant improvement was in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
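As an illustration of the two-layer architecture described above, the following is a minimal sketch of a stacked ensemble in which five differently structured feedforward networks produce first-layer estimates and an XGB regressor combines them. The feature set, network sizes, hyperparameters, and synthetic data are assumptions for the sketch, not the authors' settings; a careful implementation would also use out-of-fold first-layer predictions to avoid overfitting the meta-learner.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 6))   # e.g., radiation, temperature, VPD, wind, soil moisture, hour
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.5, size=5000)   # stand-in CO2 flux

hidden_layouts = [(16,), (32,), (64,), (32, 16), (64, 32)]       # five FFNN structures
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=1000, random_state=i)
         for i, h in enumerate(hidden_layouts)]

# First layer: each FFNN is fit on the observed (non-gap) records.
first_layer = np.column_stack([m.fit(X, y).predict(X) for m in ffnns])

# Second layer: the XGB maps the five FFNN estimates to the final flux estimate.
meta = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
meta.fit(first_layer, y)

# Gap-filling: run the meteorological drivers of the gap records through both layers.
X_gap = rng.normal(size=(10, 6))
gap_inputs = np.column_stack([m.predict(X_gap) for m in ffnns])
print(meta.predict(gap_inputs))                                  # gap-filled CO2 flux values
```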

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 111
292 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation

Authors: Helen J. Spray, Karina J. Linnell

Abstract:

Local-global bias in adulthood is strongly dependent on environmental factors, and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific: whereas urbanisation promoted a global bias in urbanised Namibian women but not men, education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions which shape perceptual bias. Yet, to date, there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from three populations: traditional Namibian, urban Namibian, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from as early as age 4 in the British sample and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias indistinguishable from that of British children; however, this global bias did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender-by-age interaction which was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias. Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with the aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood, and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world, it will be important to understand the processes that underpin its effects and their implications.

Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization

Procedia PDF Downloads 119
291 Nudging the Criminal Justice System into Listening to Crime Victims in Plea Agreements

Authors: Dana Pugach, Michal Tamir

Abstract:

Most criminal cases end with a plea agreement, an issue whose many aspects have been discussed extensively in the legal literature. One important feature, however, has gained little notice: crime victims' place in plea agreements following the federal Crime Victims' Rights Act of 2004. This law has provided victims some meaningful and potentially revolutionary rights, including the right to be heard in the proceeding and a right to appeal against a decision made while ignoring the victim's rights. While the victims' rights literature has always emphasized the importance of such rights, references to this provision in the general literature on plea agreements are sparse, if they exist at all. Furthermore, only a few cases mention this right. This article purports to bridge these two bodies of legal thinking – the vast literature concerning plea agreements and victims' rights research – by using behavioral economics. The article will, firstly, trace the possible structural reasons for the failure of this right to materialize. The relevant incentives of all actors involved will be identified, as well as the consequential processes that lead to the malfunction of victims' rights. Secondly, the article will use nudge theory to suggest solutions that enhance incentives for the repeat players in the system (prosecution, judges, defense attorneys) and lead to the strengthening of the weaker group's interests – those of the crime victims. The behavioral psychology literature recognizes that the framework in which an individual confronts a decision can significantly influence that decision. Richard Thaler and Cass Sunstein developed the idea of 'choice architecture' - 'the context in which people make decisions' - which can be manipulated to make particular decisions more likely. Choice architectures can be changed by adjusting 'nudges', influential factors that help shape human behavior without negating free choice. Such nudges require decision makers to make choices instead of falling back on a familiar default option. In accordance with this theory, we suggest a rule whereby a judge should inquire into the victim's view prior to accepting the plea. This suggestion leaves the judge's discretion intact while at the same time nudging her not to go directly to the default decision, i.e., automatically accepting the plea. Creating nudges that force actors to make choices is particularly significant when an actor intends to deviate from routine behaviors but experiences significant time constraints, as in the case of judges and plea bargains. The article finally recognizes some far-reaching possible results of the suggestion. These include meaningful changes to the earlier stages of the criminal process, even before reaching court, in line with the current criticism of the plea agreement machinery.

Keywords: plea agreements, victims' rights, nudge theory, criminal justice

Procedia PDF Downloads 303
290 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born in or before 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 within The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to 3 controls by sex, age at diagnosis, and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model which included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities, and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up for up to 30 years. From 2008 to 2015, the one-year all-cause mortality for the IS patients declined, with an absolute change of -0.5%. Preventive treatments for cases, including prescriptions of statins and antihypertensives, increased considerably over time. However, prescriptions for antiplatelet drugs decreased in routine general practice from 2010 onwards. The survival model revealed a survival benefit of antiplatelet treatment to stroke survivors, with a hazard ratio (HR) of 0.92 (0.90-0.94). IS diagnosis had significant interactions with gender, age at entry, and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05-3.72) for cases compared to controls. Hypertension was associated with poor survival, with HR = 4.79 (4.49-5.09) for hypertensive cases relative to non-hypertensive controls, though the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82-1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescriptions of stroke preventive treatments increased and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients. Antiplatelet drugs were found to be protective of survival. Greater efforts are required to reduce the burden of stroke through health service development and primary prevention.
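As a rough illustration of a Weibull survival model with covariate effects on both the scale and the shape parameter, the following is a minimal sketch using the lifelines library. The shared random effect of general practice, the matching, and the imputation are omitted, and the column names and synthetic data are assumptions, not the study's data or code.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

# Synthetic stand-in for the case-control data (illustrative only).
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "is_case":      rng.integers(0, 2, n),   # 1 = first-ever IS, 0 = matched control
    "hypertension": rng.integers(0, 2, n),
    "antiplatelet": rng.integers(0, 2, n),
    "male":         rng.integers(0, 2, n),
})
baseline = rng.weibull(1.3, n) * 12.0        # baseline survival times in years
df["years"] = baseline * np.exp(-0.8 * df["is_case"] - 0.5 * df["hypertension"])
df["died"] = (rng.random(n) < 0.6).astype(int)   # observed-event indicator

# ancillary=True lets every covariate act on the Weibull shape parameter (rho_)
# as well as on the scale parameter (lambda_), i.e. both scale and shape effects.
model = WeibullAFTFitter()
model.fit(df, duration_col="years", event_col="died", ancillary=True)
model.print_summary()
```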

Keywords: general practice, hazard ratio, the health improvement network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model

Procedia PDF Downloads 147
289 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Secondly, the exploration extends to filter anisotropy and its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
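To illustrate the direct deconvolution idea in its simplest form, the following one-dimensional sketch filters a synthetic field with an invertible Gaussian filter, inverts the filter directly in Fourier space, and reconstructs the SFS stress tau = bar(uu) - bar(u)bar(u) from the deconvolved field. The grid size, the filter width (an FGR-like ratio of 4), and the synthetic signal are assumptions; this is not the authors' implementation.

```python
import numpy as np

n = 256
L = 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)           # angular wavenumbers

rng = np.random.default_rng(0)
u = np.sum([np.cos(m * x + rng.uniform(0, 2 * np.pi)) / m
            for m in range(1, 32)], axis=0)          # stand-in multi-scale signal

delta = 4 * (L / n)                                  # filter width, 4 grid spacings
G = np.exp(-(k * delta) ** 2 / 24.0)                 # Gaussian filter transfer function

def filt(f):
    """Apply the Gaussian filter in Fourier space."""
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

u_bar = filt(u)

# Direct deconvolution: divide by G where it is safely invertible.
eps = 1e-8
u_hat = np.fft.fft(u_bar)
u_star = np.real(np.fft.ifft(np.where(G > eps, u_hat / np.maximum(G, eps), 0.0)))

tau_true = filt(u * u) - u_bar * u_bar               # exact SFS stress
tau_ddm = filt(u_star * u_star) - u_bar * u_bar      # DDM reconstruction

corr = np.corrcoef(tau_true, tau_ddm)[0, 1]
print(f"correlation between true and reconstructed SFS stress: {corr:.3f}")
```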

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 50
288 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks; at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error characteristics will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over considerable lengths of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems related to flows that involve strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
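The IDS formulation itself is not given in this abstract, so the following is only a reference sketch: a standard explicit finite-difference solution of the viscous Burgers equation u_t + u u_x = nu u_xx, the kind of unsteady benchmark the abstract proposes. The grid, viscosity, and initial condition are assumptions for illustration.

```python
import numpy as np

n, L, nu = 400, 2 * np.pi, 0.07
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)           # periodic domain
u = np.sin(x) + 1.5                                  # smooth profile that steepens; u > 0

t, t_end = 0.0, 2.0
while t < t_end:
    # Time step limited by both the convective CFL and the diffusive criterion.
    dt = min(0.4 * dx / np.max(np.abs(u)), 0.4 * dx**2 / (2 * nu))
    up = np.roll(u, -1)                              # u[i+1], periodic wrap
    um = np.roll(u, +1)                              # u[i-1], periodic wrap
    conv = u * (u - um) / dx                         # first-order upwind (valid for u > 0)
    diff = nu * (up - 2 * u + um) / dx**2            # central second difference
    u = u + dt * (diff - conv)
    t += dt

print(f"min/max of u at t = {t_end}: {u.min():.3f} / {u.max():.3f}")
```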

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 109
287 Assessment of Water Pollution in the River Nile (Egypt) by Applying Blood Biomarkers in Two Excellent Model Species Oreochromis niloticus niloticus and Clarias gariepinus

Authors: Alaa G. M. Osman, Abd-El –Baset M. Abd El Reheem, Khaled Y. Abouelfadl, Usama M. Mahmoud, Mohsen A. Moustafa

Abstract:

This study aimed to explore new sites for biomarker research and to establish the use of blood parameters in wild fish populations. Four hundred and twenty fish samples were collected from six sites along the whole course of the river Nile, Egypt. The mean values of erythrocytes, thrombocytes, hemoglobin concentration, hematocrit value, and mean corpuscular volume were significantly lower in the blood of Nile tilapia and African catfish collected from downstream (contaminated) sites compared to upstream sites. In contrast, the mean corpuscular hemoglobin and mean corpuscular hemoglobin concentration in the peripheral blood of both fish species significantly increased from upstream to downstream along the river Nile. The leukocyte count was significantly decreased at contaminated sites compared to the upstream area. Hematological variables in the peripheral blood of Oreochromis niloticus niloticus and Clarias gariepinus exhibited significant (p<0.05) correlations with nearly all the measured chemical and physical parameters along the Nile course. In the present study, lower cellular and nuclear areas and lower cellular and nuclear shape factors were recorded in the erythrocytes of fish collected from downstream sites compared to those caught from upstream sites. This was confirmed by higher ratios of immature red cells in the blood of fish sampled from the downstream river Nile. Karyorrhetic and enucleated erythrocytes, which were more frequent in the blood of fish collected from downstream sites, were significantly correlated with the physicochemical parameters of water samples collected from the same sites. To see whether there was any correlation between the altered physiological fitness of fish and environmental stress, we measured serum biochemical variables, namely total protein, cholesterol, triglycerides, calcium, chlorides, alkaline phosphatase (ALP) activity, aspartate aminotransferase (AST), alanine aminotransferase (ALT), uric acid, creatinine, and serum glucose. The levels of all the selected biochemical variables in the blood of O. niloticus niloticus and C. gariepinus were significantly higher (p<0.05) at downstream sites. According to the present results, nearly all the measured haematological and blood biochemical variables are suitable indicators of contaminant exposure in O. niloticus niloticus and C. gariepinus. The detected erythrocyte malformations in blood collected from Nile tilapia and African catfish also proved suitable for bio-monitoring aquatic pollution. The results revealed species-specific differences in sensitivity, suggesting that Nile tilapia may serve as a more sensitive test species than African catfish.

Keywords: biomarkers, water pollution, blood parameters, river nile, african catfish, nile tilapia

Procedia PDF Downloads 268
286 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, which gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers is optimized for lower NPSH₃% by its blade design, whereas the other is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements, and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers sampling at 131 kHz. The vibration frequency-domain data (up to 20 kHz) and the time-domain data are analyzed, as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not yet affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations, whose results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings, and simulation results. The data indicate that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
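As a sketch of the kind of time-domain criterion suggested above, the following computes the short-time RMS of an accelerometer signal and flags abrupt excursions as candidate cavitation cloud collapses. Only the 131 kHz sample rate is taken from the abstract; the synthetic signal, window length, and threshold are assumptions for illustration.

```python
import numpy as np

fs = 131_000                                         # accelerometer sample rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)
signal = 0.1 * rng.normal(size=t.size)               # broadband background vibration
for t0 in (0.21, 0.47, 0.83):                        # three synthetic cloud collapses
    burst = np.abs(t - t0) < 0.002                   # 4 ms burst around each event
    signal[burst] += 3.0 * np.sin(2 * np.pi * 9000 * t[burst]) * rng.random(burst.sum())

win = 512                                            # ~3.9 ms analysis window
n_win = signal.size // win
rms = np.sqrt((signal[: n_win * win].reshape(n_win, win) ** 2).mean(axis=1))

threshold = rms.mean() + 4 * rms.std()               # flag abrupt RMS excursions
hits = np.nonzero(rms > threshold)[0] * win / fs     # window start times [s]
print("candidate collapse events near t =", np.round(hits, 3), "s")
```

In the experiment, the flagged times could then be checked directly against the trigger-synchronized high-speed frames to confirm that each RMS excursion coincides with a visible cloud collapse.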

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 159
285 Teachers’ Instructional Decisions When Teaching Geometric Transformations

Authors: Lisa Kasmer

Abstract:

Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because the development and support of current and future teachers require more effective ways to teach geometry to their students. The following research questions frame this study: (1) as middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system, and (2) how do the goals, resources, and orientations of these teachers impact their instructional decisions, and what do they reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic that supports students’ understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, and calculus. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics. Teachers’ instructional decisions, together with the goals, orientations, and resources that support those decisions, were analyzed using open coding. Open coding is recognized as an initial step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on teachers’ thinking processes related to the decisions they make while planning and reflecting on lessons were also noted. Emerging ideas and additional themes common across teachers were compared and analyzed while seeking patterns. Finally, attributes of teachers’ goals, orientations, and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When a teacher does not have a robust understanding of transformations, they are limited by this lack of knowledge, which impacts students’ opportunities to learn and thus disadvantages students’ understanding of transformations. Teachers’ goals are also limited by a paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.

Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics

Procedia PDF Downloads 67