Search results for: inertial measurement units
339 The Impact of Insomnia on the Academic Performance of Mexican Medical Students: Gender Perspective
Authors: Paulina Ojeda, Damaris Estrella, Hector Rubio
Abstract:
Insomnia is a disorder characterized by difficulty falling asleep, staying asleep, or both. It negatively affects people's quality of life and hinders concentration, attention, memory, and motor skills, among other abilities, complicating work and learning. Some studies show that women are more susceptible to insomnia. Medical curricula usually involve a great deal of theoretical and memorization-heavy content, especially in the early years of the course. University courses are passed by demonstrating a level of competence or acquired knowledge, and in Mexico the most widely used form of measurement is written exams with numerical results. The prevalence of sleep disorders in university students is usually high, so it is important to know whether insomnia affects academic performance in men and women. A cross-sectional study was designed that included a probabilistic sample of 118 regular students from the School of Medicine of the Autonomous University of Yucatan, Mexico, all of legal age. The project was authorized by the School of Medicine, and all ethical implications were monitored. Participants anonymously completed the following questionnaires: the Pittsburgh Sleep Quality Index, the Insomnia Severity Index, the AUDIT test, and a form for epidemiological and clinical data. Academic performance was assessed by the average of official grades earned on written exams, as well as the number of approved and non-approved courses; these data were obtained officially through the corresponding school authorities. Students with at least one unapproved course or an average below 70 were considered poor performers; those with all courses approved and an average between 70-79, regular performers; and those with an average of 80 or higher, good performers. Statistical analysis: Student's t-test, difference of proportions, and ANOVA. 65 men with a mean age of 19.15 ± 1.60 years and 53 women of 18.98 ± 1.23 years were included. 96% of the women and 78.46% of the men sleep in the family home. 16.98% of women and 18.46% of men consume tobacco. Most students consume caffeinated beverages. 3.7% of the women and 10.76% of the men meet criteria for harmful alcohol consumption. 98.11% of the women and 90.76% of the men perceive themselves as having poor sleep quality. Insomnia was present in 73% of women and 66% of men. Women had higher levels of moderate insomnia (p=0.02) compared to men, and only one woman had severe insomnia. 50.94% of the women and 44.61% of the men had poor academic performance. 18.86% of women and 27% of men performed well. Only in the group of women did we find a significant association between poor performance and mild (p=0.0035) and moderate (p=0.031) insomnia. The medical students reported poor sleep quality and insomnia. In women, levels of insomnia were associated with poor academic performance.
Keywords: scholar-average, sex, sleep, university
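For the gender comparisons above, the difference-of-proportions test can be run directly on the reported counts. A minimal sketch in Python, assuming statsmodels is available and reconstructing illustrative counts from the insomnia prevalences quoted in the abstract (73% of 53 women, 66% of 65 men):

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts reconstructed from the abstract's percentages
# (73% of 53 women, 66% of 65 men reported insomnia).
insomnia_cases = [round(0.73 * 53), round(0.66 * 65)]  # [39, 43]
group_sizes = [53, 65]                                 # women, men

# Two-sided difference-of-proportions (z) test
z_stat, p_value = proportions_ztest(insomnia_cases, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```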
Procedia PDF Downloads 296
338 Layer-By-Layer Deposition of Poly (Amidoamine) and Poly (Acrylic Acid) on Grafted-Polylactide Nonwoven with Different Surface Charge
Authors: Sima Shakoorjavan, Mahdieh Eskafi, Dawid Stawski, Somaye Akbari
Abstract:
In this study, poly (amidoamine) dendritic material (PAMAM) and poly (acrylic acid) (PAA), as polycation and polyanion, were deposited on surface-charged polylactide (PLA) nonwoven to study the relationship between the dye absorption capacity of the layered PLA and the number of deposited layers. To produce negatively charged PLA, acrylic acid (AA) was grafted onto the PLA surface (PLA-g-AA) through a chemical redox reaction with a strong oxidizing agent. Spectroscopy analysis, water contact angle measurement, and FTIR-ATR analysis confirm the successful grafting of AA onto the PLA surface by the chemical redox method. In detail, an increase in dye absorption percentage by 19% and immediate absorption of water droplets confirmed the hydrophilicity of the PLA-g-AA surface, and the presence of a new carbonyl band at 1530 cm⁻¹ and a wide hydroxyl peak between 3680-3130 cm⁻¹ confirm AA grafting. In addition, PLA, as a linear polyester, can undergo aminolysis, which is the cleavage of ester bonds and their replacement with amide bonds when exposed to an aminolysis agent. Therefore, to produce positively charged PLA, PAMAM, an amine-terminated dendritic material, was introduced to the PLA molecular chains under different conditions: (1) at 60 °C for 0.5, 1, 1.5, and 2 hours of aminolysis and (2) at room temperature (RT) for 1, 2, 3, and 4 hours of aminolysis. Weight changes and spectrophotometer measurements showed maxima in the weight gain graph and the K/S value curve, indicating the highest PAMAM attachment at 60 °C for 1 hour and at RT for 2 hours, which are considered the optimum conditions. Also, an emerging new peak around 1650 cm⁻¹, corresponding to N-H bending vibration, and a wide double peak around 3670-3170 cm⁻¹, corresponding to N-H stretching vibration, confirm PAMAM attachment under the selected optimum conditions. Subsequently, depending on the initial surface charge of the grafted PLA, layer-by-layer (LbL) deposition was performed starting with either PAA or PAMAM. FTIR-ATR results confirm chemical changes in the samples due to deposition of the first layer (PAA or PAMAM). Generally, spectroscopy analysis indicated that an increase in layer number reduced dye absorption capacity. This can be attributed to the partial deposition of each new layer on the previously deposited layer; therefore, more PAMAM is available in the first layer than in the third. In detail, for layered PLA where LbL started on the negatively charged surface, having PAMAM as the top layer (PLA-g-AA/PAMAM) showed the highest absorption of both cationic and anionic model dyes.
Keywords: surface modification, layer-by-layer technique, dendritic materials, PAMAM, dye absorption capacity, PLA nonwoven
Procedia PDF Downloads 85
337 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
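The intensity-windowing and resampling step described above can be sketched in a few lines of Python with NumPy and SciPy; the target grid, voxel spacing, and HU window follow the abstract, while the function and variable names (and the axis convention of the input volume) are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu: np.ndarray, spacing_mm: tuple) -> np.ndarray:
    """Resample a CT volume to ~1 x 1 x 1 mm voxels, clip to (-1000, 400) HU,
    then resize to the fixed 128 x 128 x 60 grid and zero-center.
    spacing_mm is the per-axis voxel size matching the array's axis order."""
    # Resample to isotropic 1 mm voxels (new shape = old shape * spacing)
    iso = zoom(volume_hu, zoom=spacing_mm, order=1)
    # Confine intensities to the lung-relevant HU window
    iso = np.clip(iso, -1000, 400)
    # Resize to the fixed network input size
    target = (128, 128, 60)
    iso = zoom(iso, [t / s for t, s in zip(target, iso.shape)], order=1)
    # Normalize to [0, 1], then zero-center
    iso = (iso + 1000) / 1400.0
    return iso - iso.mean()
```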
Procedia PDF Downloads 89
336 Technological Affordances of a Mobile Fitness Application: A Role of Escapism and Social Outcome Expectation
Authors: Inje Cho
Abstract:
The leading health risks threatening the world today are associated with a modern lifestyle characterized by sedentary behavior, stress, anxiety, and an obesogenic food environment. To counter this alarming trend, the Centers for Disease Control and Prevention have proffered physical activity guidelines to bolster physical engagement. Concurrently, the spread of smartphones and mobile applications has brought a proliferation of fitness applications aimed at invigorating exercise adherence and real-time activity monitoring. Grounded in uses and gratifications theory, this study delves into the technological affordances of mobile fitness applications, discerning the mediating influences of escapism and social outcome expectations on attitudes and exercise intention. The theory explains how individuals employ distinct communication mediums to satiate their needs and desires. Technological affordances manifest as attributes of emerging technologies that galvanize personal engagement in physical activities. Features of mobile fitness applications include affordances for goal setting, virtual rewards, peer support, and exercise information. Escapism, denoting the inclination to disengage from normal routines, has emerged as a salient motivator for the consumption of new media. This study postulates that individuals' perceptions of technological affordances within mobile fitness applications can affect escapism and social outcome expectations, potentially influencing attitude and behavior formation. Thus, an integrated model has been developed to empirically examine the interrelationships between technological affordances, escapism, social outcome expectations, and exercise intention. Structural equation modeling serves as the methodological tool, and a cohort of 400 Fitbit users shall be enlisted from Prolific, a data collection platform. A sequence of multivariate data analyses will scrutinize both the measurement and hypothesized structural models. By delving into the effects of mobile fitness applications, this study contributes to the growing body of new media studies in sport management. Moreover, the novel integration of uses and gratifications theory and technological affordances, viewed through the prism of escapism, illustrates the dynamics that underlie mobile fitness users' attitudes and behavioral intentions. Therefore, the findings from this study contribute to theoretical understanding and provide pragmatic insights to developers and practitioners in optimizing the impact of mobile fitness applications.
Keywords: technological affordances, uses and gratification, mobile fitness apps, escapism, physical activity
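Before a full structural model is fitted, the hypothesized mediation (affordances → escapism → exercise intention) can be probed with a simple two-step regression. A minimal sketch, assuming pandas/statsmodels and illustrative column names (affordance, escapism, intention) in a survey DataFrame; this is a rough Baron-Kenny style check, not the study's SEM procedure:

```python
import pandas as pd
import statsmodels.formula.api as smf

def probe_mediation(df: pd.DataFrame) -> None:
    """Two-step mediation check: (1) predictor -> mediator,
    (2) outcome on predictor + mediator. Column names are illustrative."""
    # Step 1: do perceived affordances predict escapism?
    m1 = smf.ols("escapism ~ affordance", data=df).fit()
    # Step 2: does escapism carry the effect on exercise intention?
    m2 = smf.ols("intention ~ affordance + escapism", data=df).fit()
    print(m1.params, m2.params, sep="\n")
```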
Procedia PDF Downloads 81
335 Transportation and Urban Land-Use System for the Sustainability of Cities: A Case Study of Muscat
Authors: Bader Eddin Al Asali, N. Srinivasa Reddy
Abstract:
Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services, and markets, which offer opportunities for production and consumption. Growth and development in urban areas is often not systematic and is directed by a number of factors like natural growth, land prices, housing availability, job locations (the central business district, CBD), transportation routes, distribution of resources, geographical boundaries, administrative policies, etc. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is the size of the city, defined by its population; urban form is the location and size of the economic activity (CBD) over the geographical space. Urban shape is the geometrical shape of the city over which the population and economic activity are distributed, and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among land uses is measured in passenger-km and mean trip length and is often used as a proxy for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent; they originate at the place of residence and end at the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building-block exercise using a specially developed interactive C++ program, and land-use transportation modeling is carried out. The land-use transportation modeling exercise helps in understanding the role of urban parameters and also in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for this classification. A pilot survey was also carried out to capture urban travel characteristics. Analysis of the land-use transportation modeling together with field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy making for the sustainability of Muscat City.
Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters
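The transportation-interaction measures used here (passenger-km and mean trip length) can be computed directly from an origin-destination work-trip matrix and an inter-zonal distance matrix. A minimal sketch with illustrative inputs, not data from the Muscat survey:

```python
import numpy as np

def transport_interaction(trips: np.ndarray, dist_km: np.ndarray):
    """Passenger-km and mean trip length from an OD trip matrix
    (trips[i, j] = work trips from zone i to zone j) and a
    zone-to-zone distance matrix in km."""
    passenger_km = float((trips * dist_km).sum())
    mean_trip_length = passenger_km / trips.sum()
    return passenger_km, mean_trip_length

# Illustrative 3-zone example
trips = np.array([[0, 120, 60], [80, 0, 40], [50, 90, 0]])
dist = np.array([[0.0, 4.0, 9.0], [4.0, 0.0, 6.0], [9.0, 6.0, 0.0]])
print(transport_interaction(trips, dist))
```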
Procedia PDF Downloads 270
334 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices
Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto
Abstract:
The paper aims to identify and discuss the impact and results of ex-ante, mid-term, and ex-post evaluation initiatives in Brazilian social enterprises from the point of view of the entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used, and, for that, semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship, and (v) green tech fields, as well as three representatives of impact investment in the (i) venture capital, (ii) loan, and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied businesses. Some stand out by adopting well-known methodologies like the Global Impact Investing Rating System (GIIRS), but they still have much to improve in several respects. Most of these enterprises rely on non-experimental research conducted by their own employees, which is ordinarily not regarded as the 'gold standard' by some authors in the area. Nevertheless, from the entrepreneurs' point of view, most of them include such routines to some extent in their day-to-day activities, despite the general difficulties of the business. In turn, the investors do not give overall direction for establishing evaluation initiatives in the respective enterprises they are funding; there is a mechanism of trust, and this is usually enough to prove the impact to all stakeholders. The work concludes that there is a large gap between what the literature states as best practices for these businesses and what the enterprises really do. Evaluation initiatives must be included to some extent in all enterprises in order to confirm the social impact they claim to realize. The development and adoption of more flexible evaluation mechanisms that consider the complexity of these businesses' routines is recommended here. The reflections of the research also suggest important implications for the field of social enterprises, whose practices are far from what the theory preaches. It highlights the legitimacy risk for enterprises that identify themselves as 'social impact' businesses, sometimes without proper proof based on causality data. Consequently, this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. In this way, the top priorities of these enterprises must be handled together with results and impact measurement activities. Likewise, further investigations that consider the trade-offs between impact and profit are recommended. In addition, research on gender, on entrepreneurs' motivations for calling their ventures social enterprises, and on the possible unintended consequences of these businesses should also be conducted.
Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem
Procedia PDF Downloads 122
333 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industries. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which estimating the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information, such as the product name and number of items. The second group contains material data like material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer-Aided Design (CAD) drawings. The number of tools applied is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur, and the related data are collected.
Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
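A setup-time estimator of this kind is a regression over mixed categorical and numeric features. A minimal sketch with scikit-learn, mirroring the four parameter groups above; the column names and network size are illustrative assumptions, not the authors' actual configuration:

```python
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative feature set mirroring the four parameter groups
categorical = ["material_type", "colour", "machine_type", "work_shift"]
numeric = ["n_items", "finest_surface_um", "tightest_tolerance_mm", "n_tools"]

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("ann", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0)),
])

# df would hold historical setup events; 'setup_minutes' is the target:
# model.fit(df[categorical + numeric], df["setup_minutes"])
```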
Procedia PDF Downloads 245
332 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework
Authors: Robert Pocklington
Abstract:
Most current language courses expect students to be 'vocational', sacrificing their free time in order to learn. However, pupils with a full-time job, or those bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful lives make private study almost impossible. And there are countless parents whose evenings and weekends have become a nightmare of trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which facilitate full learning of the key content within the classroom. A methodology which could be described as programmed mastery learning has been used at Fluency Language Academy (Spain) since 1992 to teach English to over 4,000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of programmed learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the mastery component (not stopping until everyone has learned), memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from long-term memory and converting them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with 'cue' flashcards: the brain refers consciously to the grammar rule each time it produces a phrase, until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours; this is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. Many students in our classes have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think, 'If they don't study and fail, it's their problem.' It is now becoming the teacher's problem. Teachers are ever more in need of methods which make their pupils successful learners, and this means ensuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.
Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language
Procedia PDF Downloads 110
331 Perceived Restorativeness Scale-6: A Short Version of the Perceived Restorativeness Scale for Mixed (or Mobile) Devices
Authors: Sara Gallo, Margherita Pasini, Margherita Brondino, Daniela Raccanello, Roberto Burro, Elisa Menardo
Abstract:
Most of the studies on the ability of environments to restore people's cognitive resources have been conducted in the laboratory using simulated environments (e.g., photographs, videos, or virtual reality), based on the implicit assumption that exposure to simulated environments has the same effects as exposure to real environments. However, the technical characteristics of simulated environments, such as the dynamic or static nature of the stimulus, critically affect their perception. Measuring perceived restorativeness in situ rather than in the laboratory could increase the validity of the obtained measurements. Personal mobile devices could be useful because they allow immediate access to online surveys while people are directly exposed to an environment. At the same time, it becomes important to develop short and reliable measuring instruments that allow a quick assessment of the restorative qualities of environments. One of the most frequently used self-report measures of perceived restorativeness is the Perceived Restorativeness Scale (PRS), based on attention restoration theory. Many different versions have been proposed and used according to different research purposes and needs, without their validity being studied. This longitudinal study reports some preliminary validation analyses of a short version of the original scale, the PRS-6, developed to be quick and mobile-friendly. It is composed of 6 items assessing fascination and being-away. 102 Italian university students participated in the study, 84% female, with ages ranging from 18 to 47 (M = 20.7; SD = 2.9). Data were obtained through an online survey that asked participants, once a day for seven days, to report the perceived restorativeness of the environment they were in (and the kind of environment) and their positive emotions (Positive and Negative Affect Schedule, PANAS). Cronbach's alpha and item-total correlations were used to assess reliability and internal consistency. Confirmatory factor analysis (CFA) models were run to study the factorial structure (construct validity). Correlation analyses between PRS and PANAS scores were used to check discriminant validity. Finally, multigroup CFA models were used to study measurement invariance (configural, metric, scalar, strict) between different mobile devices and between days of assessment. On the whole, the PRS-6 showed good psychometric properties, similar to those of the original scale, and invariance across devices and days. These results suggest that the PRS-6 could be a valid alternative for assessing perceived restorativeness when researchers need a brief and immediate evaluation of the recovery quality of an environment.
Keywords: restorativeness, validation, short scale development, psychometric properties
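Cronbach's alpha for a short scale like the PRS-6 is straightforward to compute from a respondents-by-items score matrix. A minimal sketch with NumPy; the simulated 102 × 6 data are illustrative, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative: 102 respondents x 6 correlated PRS-6 items
rng = np.random.default_rng(0)
base = rng.normal(5, 2, size=(102, 1))           # shared trait score
scores = base + rng.normal(0, 1, size=(102, 6))  # 6 noisy items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```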
Procedia PDF Downloads 254
330 Altruistic and Hedonic Motivations to Write eWOM Reviews on Hotel Experience
Authors: Miguel Llorens-Marin, Adolfo Hernandez, Maria Puelles-Gallo
Abstract:
The increasing influence of Online Travel Agencies (OTAs) on hotel bookings, and of the electronic word-of-mouth (eWOM) they contain, has been identified by many scientific studies as a major factor in the booking decision. The main reason is that nowadays, in the hotel sector, consumers first come into contact with the offer through the web and the online environment. Due to the nature of the hotel product and the fact that it is booked in advance of actually being seen, there is a lack of knowledge about its actual features. This makes eWOM a major channel for helping consumers reduce their perception of risk when making booking decisions. This research studies the relationship between aspects of customer influenceability from reading eWOM communications at the time of booking a hotel and the propensity to write a review; in other words, it tests relationships between the reading and the writing of eWOM. It also investigates the importance of different underlying motivations for writing eWOM. Online surveys were used to obtain data from a sample of hotel customers, with 739 valid questionnaires. A measurement model and path analysis were carried out to analyze the chain of relationships between the independent variable (influenceability from reading reviews) and the dependent variable (propensity to write a review), with the mediating effects of additional variables that help to explain the relationship. The authors also tested the moderating effects of age and gender in the model. The study considered three different underlying motivations for writing a review of a hotel experience: hedonic, altruistic, and conflicted. Results indicate that the level of influenceability from reading reviews has a positive effect on the propensity to write reviews; the reading and the writing of reviews are thus linked. The authors also find that the main underlying motivation to write a hotel review is the altruistic one, with a higher standardized regression coefficient than the hedonic motivation. The authors suggest that the propensity to write reviews is related not to sociodemographic factors (age and gender) but to attitudinal factors such as the most influential factor when reading and the underlying motivations to write. This sheds light on customers' motivations for engaging in review writing. The implication is that managers should encourage their customers to write eWOM reviews on altruistic grounds, to help other customers make a decision. The most important contribution of this work is to link the effect of reading hotel reviews with the propensity to write reviews.
Keywords: hotel reviews, electronic word-of-mouth (eWOM), online consumer reviews, digital marketing, social media
Procedia PDF Downloads 102
329 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover
Authors: E. O. Okan, A. Kepic, P. Williams
Abstract:
Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive action and are in most cases associated with felsic igneous rocks, commonly potassic igneous magmatism, with the deposits ranging from ~2.2-1.5 Ga in age. For the past two decades, geological, geochemical, and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, at greater depths their low resolution limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore bodies under regolith cover continues, driven by the depletion of near-surface deposits, there is a compelling need for new exploration technology to reach these deep-seated ore bodies within 1-4 km, the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and its superior spatial resolution maintained with depth. Further, in many geological scenarios it offers greater '3D mapability' of units within the stratigraphic boundary. Despite these superior attributes, no case for crustal-scale seismic surveys has been made because there has not been a compelling argument of economic benefit to proceed with such work. For the seismic reflection method to be used at these scales (hundreds to thousands of square km covered), the technical risks or the survey costs have to be reduced. In addition, as most IOCG deposits have a large footprint due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron-oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveying with 20 m geophone and 40 m shot spacing is a feasible exploration tool for locating IOCG deposits even when they are hosted in very complex structures. The migrated sections were not only able to identify and trace the various layers and complex structures but also showed reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and show the hypothesized characteristics; thus, the seismic method appears to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveys are viable for IOCG exploration, as they can detect mineralised intrusive structures along known favourable corridors. This would help reduce the exploration risk associated with locating undiscovered resources, and support life-of-mine studies that enable better development decisions from the very beginning.
Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys
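The detectability of an intrusive body in a migrated section ultimately rests on the acoustic impedance contrast at its edges. A minimal sketch of the normal-incidence reflection coefficient; the density and velocity values are illustrative assumptions, not Gawler Craton measurements:

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflectivity R = (Z2 - Z1) / (Z2 + Z1),
    where Z = density * P-wave velocity (acoustic impedance)."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Illustrative host rock vs felsic intrusion (kg/m^3, m/s)
print(f"R = {reflection_coefficient(2700, 5800, 2620, 6100):.3f}")
```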
Procedia PDF Downloads 326
328 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. From the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts the data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper briefly describes the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys are presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to agricultural field mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
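In systems of this kind, elemental content is typically derived from the net area of each element's characteristic gamma peak through a calibration factor, with each result stamped with GPS coordinates for mapping. A minimal sketch under that assumption; the peak windows and calibration constants are illustrative placeholders, not the instrument's actual values:

```python
import numpy as np

# Illustrative characteristic-peak windows (keV) and calibration
# factors (weight fraction per net count rate); placeholder values.
PEAKS = {"C": (4380, 4500), "Si": (1720, 1840), "Fe": (7590, 7680)}
CAL = {"C": 2.1e-4, "Si": 1.5e-4, "Fe": 3.0e-4}

def elemental_content(energy_kev, counts, live_time_s, element):
    """Net count rate in the element's peak window times a
    calibration factor gives an elemental weight fraction."""
    lo, hi = PEAKS[element]
    window = (energy_kev >= lo) & (energy_kev <= hi)
    # Simple linear background estimated from the window edges
    edge_mean = 0.5 * (counts[window][0] + counts[window][-1])
    net = max(counts[window].sum() - edge_mean * window.sum(), 0.0)
    return CAL[element] * net / live_time_s

# Each scan record then becomes a (lat, lon, content) row that
# ArcGIS can interpolate into a field map.
```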
Procedia PDF Downloads 134
327 Determining the Thermal Performance and Comfort Indices of a Naturally Ventilated Room with Reduced Density Reinforced Concrete Wall Construction over Conventional M-25 Grade Concrete
Authors: P. Crosby, Shiva Krishna Pavuluri, S. Rajkumar
Abstract:
Purpose: Occupied built space can be broadly classified as air-conditioned or naturally ventilated. Regardless of the building type, the objective of all occupied built space is to provide a thermally acceptable environment for human occupancy. In this respect, air-conditioned spaces allow a greater degree of flexibility to control and modulate comfort parameters during the operation phase. In the case of naturally ventilated spaces, however, a number of design features favoring indoor thermal comfort must be conceptualized from the design phase onwards. One such primary design feature that must be prioritized is the selection of the building envelope material, as it governs the flow of energy from the outside environment to the occupied space. Research Methodology: In India and many countries across the globe, the standard material used for the building envelope is reinforced concrete (i.e., M-25 grade concrete). The comfort inside RC built environments in warm and humid climates (i.e., mid-day temperatures of 30-35 ˚C, diurnal variation of 5-8 ˚C, and RH of 70-90%) is unsatisfying, to say the least. This study is mainly focused on reviewing the impact of the mix design of conventional M-25 grade concrete on indoor comfort. In the proposed mix design, air entrainment is introduced to reduce the density of M-25 grade concrete to the range of 2000 to 2100 kg/m³. Thermal performance parameters and indoor comfort indices are analyzed for the proposed mix and compared against conventional M-25 grade concrete. Diverse methodologies govern indoor comfort calculation; in this study, three distinct approaches are adopted to calculate comfort: a) the Indian adaptive thermal comfort model, b) the Tropical Summer Index (TSI), and c) air temperature below 33 ˚C with RH below 70%. The data required for the thermal comfort study were acquired by field measurement (for the new mix design) and by simulation using DesignBuilder (for the conventional concrete grade). Findings: The analysis indicates that the Tropical Summer Index applies a higher degree of stringency in determining the occupant comfort band while also allowing leverage in the thermally tolerable band over and above the other methodologies in the context of this study. Another important finding is that the new mix design ensures a 10% reduction in indoor air temperature (IAT) relative to the outdoor dry-bulb temperature (ODBT) during the day. This translates to a significant temperature difference of 6 ˚C between IAT and ODBT.
Keywords: Indian adaptive thermal comfort, indoor air temperature, thermal comfort, tropical summer index
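The third comfort criterion above is simple enough to evaluate directly on logged measurements. A minimal sketch that flags comfortable hours from hourly indoor readings; the input arrays are illustrative, not the study's field data:

```python
import numpy as np

def comfort_fraction(temp_c: np.ndarray, rh_pct: np.ndarray) -> float:
    """Fraction of hours meeting criterion (c): air temperature
    below 33 degC and relative humidity below 70%."""
    comfortable = (temp_c < 33.0) & (rh_pct < 70.0)
    return float(comfortable.mean())

# Illustrative day of hourly indoor readings
temp = np.array([28, 29, 31, 33, 34, 32, 30, 29], dtype=float)
rh = np.array([65, 68, 72, 66, 60, 58, 64, 69], dtype=float)
print(f"{comfort_fraction(temp, rh):.0%} of hours comfortable")
```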
Procedia PDF Downloads 321
326 Vapour Liquid Equilibrium Measurement of CO₂ Absorption in Aqueous 2-Aminoethylpiperazine (AEP)
Authors: Anirban Dey, Sukanta Kumar Dash, Bishnupada Mandal
Abstract:
Carbon dioxide (CO2) is a major greenhouse gas responsible for global warming, and fossil fuel power plants are its main emitting sources. The capture of CO2 is therefore essential to keep emission levels within the standards. Carbon capture and storage (CCS) is considered an important option for the stabilization of atmospheric greenhouse gases and the minimization of global warming effects. There are three approaches to CCS: pre-combustion capture, where carbon is removed from the fuel prior to combustion; oxy-fuel combustion, where coal is combusted with oxygen instead of air; and post-combustion capture, where the fossil fuel is combusted to produce energy and CO2 is removed from the flue gases left after combustion. Post-combustion technology offers an advantage in that existing combustion technologies can still be used without major changes. A number of separation processes can be utilized as part of post-combustion capture technology, including (a) physical absorption, (b) chemical absorption, (c) membrane separation, and (d) adsorption. Chemical absorption is one of the most extensively used technologies for large-scale CO2 capture systems. The industrially important solvents are primary amines like monoethanolamine (MEA) and diglycolamine (DGA), secondary amines like diethanolamine (DEA) and diisopropanolamine (DIPA), and tertiary amines like methyldiethanolamine (MDEA) and triethanolamine (TEA). Primary and secondary amines react quickly and directly with CO2 to form stable carbamates, while tertiary amines do not react directly with CO2; in aqueous solution they catalyze the hydrolysis of CO2 to form a bicarbonate ion and a protonated amine. Concentrated piperazine (PZ) has been proposed as a better solvent, as well as an activator, for CO2 capture from flue gas, with a 10% energy benefit compared to conventional amines such as MEA. However, the application of concentrated PZ is limited by its low solubility in water at low temperature and lean CO2 loading. Following the performance of PZ, its derivative 2-aminoethylpiperazine (AEP), a cyclic amine, can be explored as an activator for the absorption of CO2. Vapour-liquid equilibrium (VLE) in CO2 capture systems is an important factor in the design of separation equipment and gas treating processes. For proper thermodynamic modeling, accurate equilibrium data for the solvent system over a wide range of temperatures, pressures, and compositions are essential. The present work focuses on the determination of VLE data for the (AEP + H2O) system at 40 °C over various composition ranges.
Keywords: absorption, aminoethyl piperazine, carbondioxide, vapour liquid equilibrium
Procedia PDF Downloads 269
325 Calcium Release-Activated Calcium Channels as a Target in Treatment of Allergic Asthma
Authors: Martina Šutovská, Marta Jošková, Ivana Kazimierová, Lenka Pappová, Maroš Adamkov, Soňa Fraňová
Abstract:
Bronchial asthma is characterized by increased bronchoconstrictor responses to provoking agonists, airway inflammation, and remodeling. All these processes involve Ca2+ influx through Ca2+ release-activated Ca2+ channels (CRAC), which are widely expressed in immune cells, respiratory epithelium, and airway smooth muscle (ASM) cells. Our previous study pointed to the possible therapeutic potency of CRAC blockers in an experimental guinea pig asthma model. The presented work analyzed the complex anti-asthmatic effect of a long-term administered CRAC blocker, including its impact on allergic inflammation, airway hyperreactivity, remodeling, and mucociliary clearance. Ovalbumin-induced allergic inflammation of the airways according to Franova et al. was followed by 14 days of administration of the CRAC blocker (3-fluoropyridine-4-carboxylic acid, FPCA) at a dose of 1.5 mg/kg bw. For comparative purposes, salbutamol, budesonide, and saline were administered to control groups. The anti-inflammatory effect of FPCA was estimated from changes in IL-4, IL-5, IL-13, and TNF-α in serum and bronchoalveolar lavage fluid (BALF), analyzed by Bio-Plex® assay, as well as by immunohistochemical staining focused on the assessment of tryptase and c-Fos positivity in pulmonary samples. Airway hyperreactivity was evaluated in vivo according to Pennock et al. and in vitro by the organ tissue bath method. Immunohistochemical changes in the ASM actin and collagen III layers, as well as mucin secretion, evaluated the anti-remodeling effect of FPCA. Measurement of ciliary beat frequency (CBF) in vitro using LabVIEW™ software determined the impact on mucociliary clearance. Long-term administration of FPCA to sensitized animals resulted in: i. a significant decrease in cytokine levels and in tryptase and c-Fos positivity, similar to the effect of budesonide; ii. a meaningful decrease in basal and bronchoconstrictor-induced airway hyperreactivity in vivo and in vitro, comparable to salbutamol; iii. significant inhibition of airway remodeling parameters; iv. insignificant changes in CBF. All these findings confirmed the complex anti-asthmatic effect of the CRAC channel blocker and evidenced these structures as a rational target in the treatment of allergic bronchial asthma.
Keywords: allergic asthma, CRAC channels, cytokines, respiratory epithelium
Procedia PDF Downloads 522
324 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; it can therefore be integrated into mechanisms for reducing carbon emissions. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies 'what is where?', prior to the quantification of carbon stock in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making from this large amount of data, satellite data need to be organized to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season, in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems, plus qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); (iii) an assessment report on the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
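The stratification step described above, unsupervised clustering of per-pixel NDVI time series, can be sketched with scikit-learn. A minimal illustration, assuming the NDVI stack has already been read out of the data cube as a (time, height, width) array; the toy array sizes are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

def stratify_ndvi(ndvi_stack: np.ndarray, n_strata: int = 15) -> np.ndarray:
    """Cluster per-pixel NDVI time series into vegetation strata.
    ndvi_stack: (time, height, width) array from the data cube."""
    t, h, w = ndvi_stack.shape
    # Each pixel becomes one sample whose features are its NDVI history
    samples = ndvi_stack.reshape(t, h * w).T
    labels = KMeans(n_clusters=n_strata, n_init=10,
                    random_state=0).fit_predict(samples)
    return labels.reshape(h, w)

# Illustrative toy stack: 100 time steps over a 50 x 50 tile
strata = stratify_ndvi(np.random.rand(100, 50, 50))
```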
Procedia PDF Downloads 146
323 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach
Authors: Razaq A. Olaitan, Ayansina Ayanlade
Abstract:
Understanding aerosol properties is the main goal of global research aiming to lower the uncertainty that aerosol particles introduce into the trends and magnitude of climate change. In order to identify aerosol particle types and optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, a study conducted in Ilorin, Nigeria, examined data from the Aerosol Robotic Network's (AERONET) ground-based sun/sky scanning radiometer. The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute the data's monthly, seasonal, and annual mean averages. The distribution of different aerosol types was analyzed using scatterplots, and the optical properties of the aerosol were investigated using pertinent mathematical theorems. Correlation statistics were employed to understand the relationships between particle concentration and particle properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the study's findings indicate that the aerosol types identified between 2019 and 2021 are as follows: 29.22% urban-industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). Convective wind systems, which frequently carry particles over long distances in the atmosphere, were responsible for the peak columnar aerosol loadings observed in August of the study period. The study has shown that while coarse-mode particles dominate, fine particles are increasing in both seasonal and annual trends; these trends are linked to biomass burning and to human activities in the city. The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is rapidly increasing in Ilorin, causing changes in aerosol properties and indicating potential health risks from climate change and from human influence on geological and environmental systems.
Keywords: aerosol loading, aerosol types, health risks, optical properties
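The Angstrom exponent used to separate fine- and coarse-mode dominance is derived from AOD measured at two wavelengths. A minimal sketch; the AOD values and the 440/870 nm wavelength pair are illustrative, not the Ilorin retrievals:

```python
import math

def angstrom_exponent(aod1, aod2, wl1_nm, wl2_nm):
    """alpha = -ln(AOD1 / AOD2) / ln(wl1 / wl2); larger alpha
    indicates fine-mode dominance, smaller alpha coarse mode."""
    return -math.log(aod1 / aod2) / math.log(wl1_nm / wl2_nm)

# Illustrative AERONET-style retrievals at 440 and 870 nm
print(f"alpha = {angstrom_exponent(0.62, 0.35, 440, 870):.2f}")
```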
Procedia PDF Downloads 64
322 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings
Authors: Youlu Huang, Huanjun Jiang
Abstract:
A large number of old masonry buildings built in the last century still remain in cities. They present problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been renovated by refurbishing façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem; it is difficult to comprehensively address both poor safety and the lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it. Based on field measurements and calculations with the YJK software, the seismic performance of an actual three-story masonry structure in Shanghai was assessed. The results show that the material strength of the masonry is low and that the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time-history analysis of the structure was carried out using SAP2000. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom-frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Strengthening methods for the bottom-frame structure using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, are proposed. Time-history analyses of the two kinds of structures under frequent, fortification-level, and rare earthquakes were conducted with SAP2000. For the bottom-frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by either method, compared to the original masonry structure. Previous earthquake disasters indicate that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story drift angle of the bottom-frame floor meets the 1/100 limit of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquake is similar to that of the structure with SRMM, while the base-isolation scheme better protects the bottom frame. Both strengthening methods could significantly improve the seismic performance of the bottom-frame structure.
Keywords: old buildings, adding story, seismic strengthening, seismic performance
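The 1/100 acceptance check quoted above is a simple ratio of relative story displacement to story height. A minimal sketch that screens time-history output for the governing drift; the displacement histories and story height are illustrative, not the SAP2000 results:

```python
import numpy as np

def max_drift_ratio(disp_below_mm, disp_above_mm, story_height_mm):
    """Peak inter-story drift angle from displacement time histories
    of the floors bounding a story."""
    drift = np.abs(np.asarray(disp_above_mm) - np.asarray(disp_below_mm))
    return drift.max() / story_height_mm

# Illustrative histories for the bottom frame story (height 3600 mm)
below = np.array([0.0, 2.1, -3.4, 5.0, -4.2])
above = np.array([0.0, 9.8, -15.6, 24.1, -20.3])
ratio = max_drift_ratio(below, above, 3600)
ok = "OK" if ratio <= 1 / 100 else "exceeds limit"
print(f"drift = 1/{1 / ratio:.0f}, limit = 1/100 -> {ok}")
```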
Procedia PDF Downloads 123
321 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach
Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené
Abstract:
Managerial actions which negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited in the crises of some European countries. The study intends to determine the effectiveness of internal control systems, to investigate whether perceived agency problems exist on the part of board members, and to establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market, and macroeconomic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between the dependent and independent variables. The Hausman test will be used to select between the random- and fixed-effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed, and a significant effect of internal controls on credit risk is expected. The study will uncover another perspective on internal controls, as not only an operational risk issue but also a credit risk issue. Banks will be cautioned that observing effective internal control systems is an ethical and socially responsible act, since the collapse (crisis) of financial institutions as a result of excessive default is a major contagion. This study deviates from the usual primary-data approach to measuring internal control variables and instead models internal control variables quantitatively for the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
Keywords: agency theory, credit risk, internal controls, revised COSO framework
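The model-selection step described above, fixed versus random effects chosen by a Hausman test, can be sketched with the linearmodels package; the variable names are illustrative, and the Hausman statistic is computed manually from the two fits:

```python
import numpy as np
from linearmodels.panel import PanelOLS, RandomEffects

def hausman(fe_res, re_res):
    """Hausman statistic comparing fixed- and random-effects fits
    on their common coefficients; compare to a chi-square value."""
    common = fe_res.params.index
    b = (fe_res.params - re_res.params[common]).values
    v = (fe_res.cov - re_res.cov.loc[common, common]).values
    return float(b @ np.linalg.inv(v) @ b)

# df: MultiIndex (bank, year) panel, 2005-2014; names illustrative
# fe = PanelOLS(df["credit_risk"], df[controls], entity_effects=True).fit()
# re = RandomEffects(df["credit_risk"], df[controls]).fit()
# stat = hausman(fe, re)  # large stat -> prefer fixed effects
```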
Procedia PDF Downloads 320
320 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique
Authors: Igor V. Savchenko, Dmitrii A. Samoshkin
Abstract:
The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated or remain insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number and uninformative (often only one point corresponds to the liquid-state region), that they are contradictory (the character of the change of thermal conductivity with temperature is not reproduced), and that the results of different measurements diverge significantly beyond the limits of their total errors. Our experimental results fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium over a wide temperature range, from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup, the LFA-427. A neodymium sample of brand NM-1 (99.21 wt % purity) and a samarium sample of brand SmM-1 (99.94 wt % purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Measuring cells of a special tantalum design were used for the experiments. The cell with a sample inside was sealed by argon-arc welding in the protective atmosphere of a glovebox. The glovebox was filled with argon of 99.998 vol. % purity; the argon was additionally purified by continuously passing it over titanium sponge heated to 900-1000 K. The overall systematic error in determining the thermal conductivity of the investigated melts was 2-5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity
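For readers unfamiliar with the laser flash technique, the standard data reduction (Parker's relation) can be sketched as below; this is not the LFA-427 vendor code, and the thickness, half-rise time, density, and specific heat values are illustrative, not measured data from the paper.

```python
# Minimal sketch of laser-flash data reduction: thermal diffusivity from the
# half-rise time of the rear-face temperature (Parker et al., 1961), then
# conductivity via density and specific heat. All numbers are illustrative.

def thermal_diffusivity(thickness_m: float, t_half_s: float) -> float:
    """Parker's relation for an adiabatic slab: a = 0.1388 * L^2 / t_1/2."""
    return 0.1388 * thickness_m**2 / t_half_s

def thermal_conductivity(a: float, density: float, cp: float) -> float:
    """lambda = a * rho * c_p, in W/(m*K)."""
    return a * density * cp

a = thermal_diffusivity(thickness_m=2.0e-3, t_half_s=0.090)  # ~6.2e-6 m^2/s
lam = thermal_conductivity(a, density=6900.0, cp=300.0)      # illustrative inputs
print(f"a = {a:.2e} m^2/s, lambda = {lam:.1f} W/(m K)")
```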
Procedia PDF Downloads 201
319 Jordan, Towards Eliminating Preventable Maternal Deaths
Authors: Abdelmanie Suleimat, Nagham Abu Shaqra, Sawsan Majali, Issam Adawi, Heba Abo Shindi, Anas Al Mohtaseb
Abstract:
The Government of Jordan recognizes that maternal mortality constitutes a grave public health problem. Over the past two decades, there has been significant progress in improving the quality of maternal health services, resulting in improved maternal and child health outcomes. Despite these efforts, the measurement and analysis of maternal mortality remained a challenge, with significant discrepancies from previous national surveys that inhibited accuracy. In response, with support from USAID, the Jordan Maternal Mortality Surveillance Response (JMMSR) System was established to collect and analyze data and equip policymakers for decision-making, guided by interdisciplinary, multi-level advisory groups, with the aim of eliminating preventable maternal deaths. A 2016 Public Health Bylaw required the notification of deaths among women of reproductive age. The JMMSR system was launched in 2018 and continues annually, analyzing data received from health facilities to guide policy to prevent avoidable deaths. To date, there have been four annual national maternal mortality reports (2018-2021). Data is collected, reviewed by the advisory groups, and then consolidated in an annual report to inform and guide the Ministry of Health (MOH). The JMMSR system collects the information necessary to calculate an accurate maternal mortality ratio and assists in identifying the leading causes of and contributing factors to each maternal death. Based on these data, national response plans are created. A monitoring and evaluation plan was designed to define, track, and improve implementation through indicators. Over the past four years, one of these indicators, 'percent of facilities notifying respective health directorates of all deaths of women of reproductive age', rose year by year from 82.16% to 92.95%, 92.50%, and 97.02%, respectively. The Government of Jordan demonstrated commitment to the JMMSR system by designating the MOH to host the system and lead the development and dissemination of policies and procedures to standardize implementation. The data were translated into practical, evidence-based recommendations. The successful impact of the results deepened the understanding of maternal mortality in Jordan and convinced the MOH to amend the Bylaw, which now mandates electronic reporting of all births and neonatal deaths from health facilities, empowering the JMMSR system through the development of a stillbirth and neonatal mortality surveillance and response system.
Keywords: maternal health, maternal mortality, preventable maternal deaths, maternal morbidity
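The maternal mortality ratio the JMMSR system enables is conventionally defined as maternal deaths per 100,000 live births in a reporting period; a one-function sketch follows, with placeholder counts rather than Jordanian data.

```python
# Minimal sketch of the maternal mortality ratio (MMR) calculation:
# maternal deaths per 100,000 live births. Counts below are placeholders.
def maternal_mortality_ratio(maternal_deaths: int, live_births: int) -> float:
    return maternal_deaths / live_births * 100_000

print(maternal_mortality_ratio(maternal_deaths=38, live_births=190_000))  # -> 20.0
```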
Procedia PDF Downloads 40
318 Volume Estimation of Trees: An Exploratory Study on Rosewood Logging Within Forest Transition and Savannah Ecological Zones of Ghana
Authors: Albert Kwabena Osei Konadu
Abstract:
One of the endemic forest species of the savannah transition zones listed in Appendix II of the Convention on International Trade in Endangered Species (CITES) is the rosewood, also known as Pterocarpus erinaceus or Krayie. Its economic viability has made it increasingly popular and in high demand. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to deficient monitoring of logging operations and quantification of harvested tree volumes. The tree information form (TIF), a volume estimation and tracking regime, has proven to be an effective sustainable management tool for regulating timber resource extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of harvested rosewood within the forest savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets, and conveying vessels. The study was limited by the use of the regulator's assigned volumes as a benchmark and was also subject to potential volume measurement error in the stacked-billet scenario due to the spaces within packed billets. These TIFs were field-tested to deduce the most viable option for tracking and estimating harvested volumes of rosewood, using the Smalian and cubic volume estimation formulas. Overall, four districts were covered; the individual-billet, stacked-billet, and conveying-vessel scenarios registered mean volumes of 25.83 m³, 45.08 m³, and 32.6 m³, respectively. These adduced volumes were validated by benchmarking against the assigned volumes of the Forestry Commission of Ghana and the known standard volumes of conveying vessels. The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. The research revealed that the conveying-vessel route is the most viable volume estimation and tracking regime for the sustainable management of Pterocarpus erinaceus, as it provided a more practical volume estimate and data extraction protocol.
Keywords: cubic volume formula, Smalian volume formula, Pterocarpus erinaceus, tree information form, forest transition and savannah zones, harvested tree volume
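Of the two formulas the study used, Smalian's is easily sketched: the volume of a log is its length times the mean of its end cross-sectional areas. The Python example below applies it to a small consignment of billets; the diameters and lengths are hypothetical.

```python
# Illustrative sketch of per-billet volume estimation with Smalian's formula:
# V = L * (A_bottom + A_top) / 2, with A = pi * d^2 / 4. Values hypothetical.
import math

def smalian_volume(d_bottom_m: float, d_top_m: float, length_m: float) -> float:
    """Log volume in m^3 from end diameters (m) and length (m)."""
    area_bottom = math.pi * d_bottom_m**2 / 4
    area_top = math.pi * d_top_m**2 / 4
    return length_m * (area_bottom + area_top) / 2

billets = [(0.42, 0.36, 2.4), (0.38, 0.33, 2.2)]  # (d_bottom, d_top, length)
total = sum(smalian_volume(*b) for b in billets)
print(f"consignment volume = {total:.3f} m^3")   # ~0.507 m^3
```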
Procedia PDF Downloads 44
317 The Importance of Fruit Trees for Prescribed Burning in a South American Savanna
Authors: Rodrigo M. Falleiro, Joaquim P. L. Parime, Luciano C. Santos, Rodrigo D. Silva
Abstract:
The Cerrado biome is the most biodiverse savanna on the planet. Located in central Brazil, its preservation is seriously threatened by the advance of intensive agriculture and livestock farming. Conservation Units and Indigenous Lands are increasingly isolated and subject to mega wildfires. Among the characteristics of this savanna, we highlight the high rate of primary biomass production and the reduced occurrence of large grazing animals. In this biome, the predominant fauna is more dependent on the fruits produced by dicotyledonous species than in other tropical savannas. Fire is a key element in the balance between mono- and dicotyledons, or between the arboreal and herbaceous strata. Therefore, applying fire regimes that maintain the balance between these strata without harming fruit production is essential in strategies for conserving the Cerrado's biodiversity. Recently, Integrated Fire Management has begun to be implemented in Brazilian protected areas. As a result, management with prescribed burns has increasingly replaced strategies based on fire exclusion, which in practice have resulted in large wildfires with highly negative impacts on fruit production and fauna. In the Indigenous Lands, these burns were carried out respecting traditional knowledge. The indigenous people showed great concern about the effects of fire on fruit plants and important animals. They recommended that the burns be carried out between April and May, as this would result in a greater production of edible fruits ('fruiting burning'). In other tropical savannas of the southern hemisphere, the preferred period tends to be later, in the middle of the dry season, when the grasses are dormant (June to August). In the Cerrado, however, this late period coincides with the flowering and sprouting of several important fruit species. To verify the best burning season, the present work evaluated the effects of fire on the flowering and fruit production of Byrsonima sp., Mouriri pusa, Caryocar brasiliense, Anacardium occidentale, Pouteria ramiflora, Hancornia speciosa, Byrsonima verbascifolia, Anacardium humile, and Talisia subalbens. The evaluations were carried out in the field, covering 31 Indigenous Lands spanning 104,241.18 km², where 3,386 prescribed burns were carried out between 2015 and 2018. The burning periods were divided into early (carried out during the rainy season), modal or 'fruiting' (carried out during the transition between seasons), and late (carried out in the middle of the dry season, when the grasses are dormant). The results corroborate the traditional knowledge, demonstrating that modal burns result in higher rates of reproduction and fruit production. Late burns showed intermediate results, followed by early burns. We conclude that management strategies based mainly on forage production, usually applied in savannas populated by grazing ungulates, may not be the best management strategy for South American savannas. The effects of fire on fruit plants, which have a particular phenological synchronization with the fauna cycle, also need to be observed when prescribing burns.
Keywords: Cerrado biome, fire regimes, native fruits, prescribed burns
Procedia PDF Downloads 218
316 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stress on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends in phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the nonparametric Sen's slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the south and 6-7 °C in the north, summers show the weakest warming over the same period, ranging from about 0.5 to 1.5 °C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold over the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
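The trend-testing pipeline this abstract describes (an annual index tested with Mann-Kendall and Sen's slope) can be sketched compactly; the example below assumes numpy/scipy, implements the standard no-ties forms of both statistics, and runs on a synthetic frost-day series rather than the Alberta grid data.

```python
# Illustrative sketch: derive one climate index (annual frost days, i.e. days
# with Tmin < 0 C, here synthesized directly as annual counts), then test the
# trend with the Mann-Kendall test and estimate its slope with Sen's estimator.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18           # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    return z, 2 * norm.sf(abs(z))                  # two-sided p-value

def sens_slope(x):
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

rng = np.random.default_rng(1)
years = np.arange(1951, 2018)
# synthetic "annual frost days": declining trend plus noise
frost_days = 210 - 0.4 * (years - 1951) + rng.normal(0, 8, len(years))

z, p = mann_kendall(frost_days)
print(f"MK z = {z:.2f}, p = {p:.4f}, Sen's slope = {sens_slope(frost_days):.2f} days/yr")
```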
Procedia PDF Downloads 93
315 Relationship between Pushing Behavior and Subcortical White Matter Lesion in the Acute Phase after Stroke
Authors: Yuji Fujino, Kazu Amimoto, Kazuhiro Fukata, Masahide Inoue, Hidetoshi Takahashi, Shigeru Makita
Abstract:
Aim: Pusher behavior (PB) is a disorder in which stroke patients shift their body weight toward the affected (hemiparetic) side of the body and push away from the non-hemiparetic side. These patients often push even harder to resist any attempt to correct their position to upright. It is known that subcortical white matter lesions (SWML) usually correlate with gait and balance function in stroke patients. However, it is unclear whether SWML influence PB. The purpose of this study was to investigate whether damage to the subcortical white matter affects the severity of PB in acute stroke patients. Methods: Fourteen PB patients without thalamic or cortical lesions (mean age 73.4 years, 17.5 days from onset) participated in this study. PB was evaluated according to the Scale for Contraversive Pushing (SCP) for sitting and/or standing. We used modified criteria wherein the SCP subscale scores in each section of the scale were >0. As a clinical measurement, patients were evaluated with the Stroke Impairment Assessment Set (SIAS). For the depiction of SWML, we used T2-weighted fluid-attenuated inversion-recovery imaging. The degree of SWML damage was assessed using the Fazekas scale. Patients were divided into two groups according to the presence of SWML (SWML+ group: Fazekas scale grades 1-3; SWML- group: Fazekas scale grade 0). The independent t-test was used to compare the SCP and SIAS. This retrospective study was approved by the Ethics Committee. Results: In the SWML+ group, the SCP was 3.7±1.0 points (mean±SD) and the SIAS was 28.0 points (median). In the SWML- group, the SCP was 2.0±0.2 points and the SIAS was 31.5 points. The SCP was significantly higher in the SWML+ group than in the SWML- group (p<0.05). The SIAS did not differ significantly between the groups (p>0.05). Discussion: The posterior thalamus is considered the neural structure that processes the afferent sensory signals mediating graviceptive information about upright body orientation in humans. Accordingly, many studies have reported that PB is typically associated with unilateral lesions of the posterior thalamus. However, our result indicates that extra-thalamic brain areas also contribute to the network controlling upright body posture. SWML might therefore induce dysfunction through malperfusion in distant thalamic or other structurally intact neural structures. This study had a small sample size; future studies should be performed with a larger number of PB patients. Conclusion: The present study suggests that SWML are associated with PB. Patients with SWML may be severely incapacitated.
Keywords: pushing behavior, subcortical white matter lesion, acute phase, stroke
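The group comparison reported here reduces to an independent two-sample t-test on SCP scores; a minimal sketch follows (assumes scipy), with hypothetical score lists rather than the study's patient data, and using the Welch variant given the unequal group variances.

```python
# Minimal sketch of the SWML+ vs SWML- comparison: independent t-test on SCP
# scores (Welch variant, since the reported SDs differ: 1.0 vs 0.2).
# Both score lists are hypothetical, not the study's data.
from scipy import stats

scp_swml_pos = [3.0, 4.5, 2.5, 4.0, 5.0, 3.5, 3.0, 4.5]   # hypothetical
scp_swml_neg = [2.0, 1.8, 2.2, 2.1, 1.9, 2.0]             # hypothetical

t, p = stats.ttest_ind(scp_swml_pos, scp_swml_neg, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a group difference
```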
Procedia PDF Downloads 246
314 Neurodiversity in Post Graduate Medical Education: A Rapid Solution to Faculty Development
Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel
Abstract:
Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability and diversity in medical education, and of the associated impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned about understanding neurodiversity and about identifying and providing support for neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams about insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. The group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: The survey highlighted marked heterogeneity in the identification, assessment, and approaches to support and management of neurodivergent trainees, and revealed a 'deficit' approach to neurodiversity. It also demonstrated a paucity of educational and protocol resources for educators and supervisors in supporting neurodivergent trainees. Discussion and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees was formalised using a thematic approach. It was guided by our survey findings specific to neurodiversity and took a triple-'A' approach: awareness, assessment, and action. This is further supported by video material incorporating stories in training, as well as mobile workshops for trainers for more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods toward a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodivergent conditions; this is complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodivergent condition is suspected or diagnosed. 3. A carefully constituted and focused Task and Finish group can rapidly identify national inconsistencies in neurodiversity support and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity: from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift that accepts 'diversity' in all its manifestations, visible and hidden.
Keywords: neurodiversity, professional support, human considerations, workplace wellbeing
Procedia PDF Downloads 91
313 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries, such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure-prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production and maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared with the sensor units in an environmental sensor network. The former may transfer data into the Cloud directly via WiFi. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) the specific data quality issues that have a crucial impact on predictive maintenance use cases; and (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We show the tools we use for data ingestion, streaming data processing, and machine learning model training, along with the tool that coordinates and schedules the different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications, and (2) it takes an aerospace example with publicly available data to illustrate each component of the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
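A sketch of the batch failure-prediction step is given below; it is not the authors' pipeline. It labels each cycle of run-to-failure data with "fails within the next 30 cycles" and evaluates with a group-aware split so that records from one engine never leak across train and test, which is one way to handle the inter-dependent-records issue the abstract raises. It assumes pandas/scikit-learn, and the engine data are synthetic stand-ins for the aircraft-engine dataset.

```python
# Illustrative sketch: remaining-useful-life labeling plus a grouped
# train/test split for run-to-failure data. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(42)
rows = []
for unit in range(60):                       # 60 engines, each run to failure
    life = rng.integers(120, 250)
    for cycle in range(1, life + 1):
        rows.append({"unit": unit, "cycle": cycle,
                     "s1": cycle / life + rng.normal(0, 0.1),  # drifting sensor
                     "s2": rng.normal(0, 1.0),                 # pure-noise sensor
                     "rul": life - cycle})                     # remaining useful life
df = pd.DataFrame(rows)
df["fail_soon"] = (df["rul"] <= 30).astype(int)  # label: failure within 30 cycles

X, y, groups = df[["cycle", "s1", "s2"]], df["fail_soon"], df["unit"]
train, test = next(GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
                   .split(X, y, groups))         # split by engine, not by row
model = GradientBoostingClassifier().fit(X.iloc[train], y.iloc[train])
auc = roc_auc_score(y.iloc[test], model.predict_proba(X.iloc[test])[:, 1])
print(f"held-out AUC (grouped by engine): {auc:.3f}")
```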
Procedia PDF Downloads 387
312 Analysis of the Key Indicators of Sustainable Tourism: A Case Study in Lagoa da Confusão/TO/Brazil
Authors: Veruska C. Dutra, Lucio F.M. Adorno, Mary L. G. S. Senna
Abstract:
Since the importance of planning sustainable tourism was recognized, effective methods of monitoring tourism have been discussed. In this sense, indicators can convey a set of information about complex processes, events, or trends, serving as an important monitoring tool and an aid in environmental assessment, helping to track progress and chart future actions, and thus contributing to decision-making. The World Tourism Organization (WTO) recognizes the importance of indicators for appraising tourism activity from the point of view of sustainability, having launched in 1995 eleven Key Indicators of Sustainable Tourism to assist in the monitoring of tourist destinations. We therefore propose a case study to examine the applicability (or otherwise) of a monitoring methodology and to aid the understanding of tourism sustainability, analyzing the effectiveness of local indicators under the approach defined by the WTO. The study was applied to the city of Lagoa da Confusão, in the state of Tocantins, northern Brazil. The case study was carried out in 2006/2007, guided by the deductive method. The indicators were measured by specific methodologies adapted to the study site, so that they could generate quantitative results that could be analyzed on the scale proposed by the WTO (0 to 10 points). The applied indicators were: Attractive Protection - AP (level of protection of a natural and cultural attraction), Sociocultural Impact - SI (level of socio-cultural impacts), Waste Management - WM (level of management of the solid waste generated), Planning Process - PP (level of trip planning), Tourist Satisfaction - TS (satisfaction with the tourist experience), Community Satisfaction - CS (satisfaction of the local community with the development of local tourism), and Tourism Contribution to the Local Economy - TCLE (level of tourism's contribution to the local economy). The city of Lagoa da Confusão proved an important object of study for the methodology in question, as it offered the conditions to analyze the indicators and the complexities that arose during the research. The data collected can inform discussions on the sustainability of tourism in the destination. The TS, CS, WM, PP, and AP indicators proved satisfactory, as they allowed measurements that 'translated' the reality under study, unlike the TCLE and SI indicators, which were not seen as reliable and clear and should be reviewed and adapted before being replicated. Applying and studying several indicators of sustainable tourism gives a better ability to analyze the local tourism situation than monitoring only one of the indicators, which does not capture all the collected data and could result in a superficial analysis of the tourist destination.
Keywords: indicators, Lagoa da Confusão, Tocantins, Brazil, monitoring, sustainability
Procedia PDF Downloads 401
311 Personal Exposure to Respirable Particles and Other Selected Gases among Cyclists near and Away from Busy Roads of Perth Metropolitan Area
Authors: Anu Shrestha, Krassi Rumchev, Ben Mullins, Yun Zhao, Linda Selvey
Abstract:
Cycling is often promoted as a means of reducing vehicular congestion, noise, and greenhouse gas and air pollutant emissions in urban areas. It is also endorsed as a healthy means of transportation in terms of reducing the risk of developing a range of physical and psychological conditions. However, people who cycle regularly may not be aware that they can become exposed to high levels of vehicular air pollutants (VAP) emitted by nearby traffic and therefore experience adverse health effects as a result. The study will highlight the present ambient air pollution levels along different cycling routes in Perth and will also make a significant contribution to the understanding of the health risks that cyclists may face from exposure to particulate air pollution. Methodology: This research was conducted in Perth, Western Australia, and consisted of two groups of cyclists cycling near high (two routes) and low (two routes) vehicular-traffic roads, at high and low levels of exertion, during the cold and warm seasons. A sample of 123 regular cyclists, non-smokers aged 20-55 who cycled at least 80 km/week, was selected for this study. Altogether, 100 males and 23 females were asked to choose one or more of the four routes, and each participant cycled the route during the warm season, the cold season, or both. Cyclists who reported cardiovascular or other chronic health conditions (excluding asthma) were not invited into the study. Exposure to the selected air pollutants was assessed by undertaking background and personal measurements, along with measurement of each participant's heart and breathing rates. Findings: According to the preliminary findings, the cyclists who used cycling routes close to high-traffic roads were exposed to higher levels of the measured air pollutants (nitrogen dioxide (NO₂) = 0.12 ppm, sulfur dioxide (SO₂) = 0.06 ppm, and carbon monoxide (CO) = 0.25 ppm) than those who cycled away from busy roads. However, we measured high concentrations of particulate air pollution near one of the low-traffic routes, which we associate with its close proximity to a ferry station. Concluding Statement: In conclusion, we recommend that cycling routes be selected away from high-traffic roads. It should also be considered that a cycling route surrounded by densely built infrastructure can trap pollutants and increase the particle counts inhaled by cyclists.
Keywords: air pollution, carbon monoxide, cyclists' health, nitrogen dioxide, nitrogen oxide, respirable particulate matters
Procedia PDF Downloads 263
310 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection, and it carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After a patient's admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and data from a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for the one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5,650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable importance methodologies were used to select the set of variables that make up the developed score. Each of these variables was dichotomized, and a cut-off point dividing the population into two groups with different mean mortalities was found; if the patient is in the group with the higher mortality, a one is assigned to the particular variable, otherwise a zero. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1,695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians to quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
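The score construction this abstract describes (dichotomize, fit a logistic regression on the binary flags, round coefficients to integer points, sum) can be sketched as follows; this assumes scikit-learn/numpy, and the variables, cut-offs, and data are synthetic illustrations, not MIMIC-III values.

```python
# Illustrative sketch of an integer point score: dichotomize each selected
# variable at a cut-off, fit a logistic regression on the binary flags, round
# the coefficients to integer point values, and sum them into a score.
# Variables, cut-offs, and outcomes below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
age = rng.normal(65, 15, n)
lactate = rng.gamma(2.0, 1.5, n)
# synthetic outcome: one-year mortality rises with age and lactate
p = 1 / (1 + np.exp(-(-4.0 + 0.03 * age + 0.5 * lactate)))
died = rng.random(n) < p

cutoffs = {"age": 70.0, "lactate": 4.0}   # hypothetical cut-off points
flags = np.column_stack([age > cutoffs["age"],
                         lactate > cutoffs["lactate"]]).astype(int)

lr = LogisticRegression().fit(flags, died)
points = np.rint(lr.coef_[0]).astype(int)  # integer point values per flag
score = flags @ points                     # per-patient total score
print("points:", dict(zip(cutoffs, points)), "example scores:", score[:5])

# second-stage model: mortality probability from the score alone
lr2 = LogisticRegression().fit(score.reshape(-1, 1), died)
levels = np.unique(score)
probs = lr2.predict_proba(levels.reshape(-1, 1))[:, 1].round(3)
print("P(death | score):", dict(zip(levels, probs)))
```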
Procedia PDF Downloads 223