Search results for: big data in higher education

24559 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, leading to high stresses in the component, the aim is to ensure an even flow of air. In the case of analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the course of temperature at a given point. Thermal imaging tests can be helpful in this case. With appropriate camera settings and taking into account environmental conditions, accurate temperature fields can be obtained in the form of thermograms. Emission of heat from the engine to the engine compartment is an important issue when designing a cooling system. Also, in the case of liquid cooling, the main sources of heat, such as emissions from the engine block, cylinders, etc., should be identified; this is important when redesigning the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of the variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of engine oil, in particular its viscosity; a viscosity that is too low or too high can result in fast wear of engine parts. One way to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. The heat emission tests were performed with the ThermaCAM S65 thermovision monitoring system from FLIR (Forward-Looking Infrared) together with the ThermaCAM Researcher Professional software. The measurements were carried out after the engine warm-up, at an engine speed of 5300 rpm and the following environmental parameters: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
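
As an aside on how thermograms yield temperature fields once camera settings and ambient conditions are accounted for, the sketch below applies a simplified gray-body correction for emissivity and reflected ambient radiation. The array size, emissivity, and temperatures are illustrative assumptions; real radiometric cameras such as the ThermaCAM S65 use band-specific Planck calibration rather than this total-radiation approximation.

```python
import numpy as np

def corrected_temperature(t_apparent_k, emissivity, t_reflected_k):
    """Object temperature (K) from camera-apparent blackbody temperature (K),
    assuming a gray body and negligible atmospheric attenuation."""
    return ((t_apparent_k**4 - (1.0 - emissivity) * t_reflected_k**4) / emissivity) ** 0.25

# Hypothetical 240 x 320 thermogram of apparent temperatures (K) over a cylinder rib
thermogram = np.full((240, 320), 390.0)
t_rib = corrected_temperature(thermogram, emissivity=0.95, t_reflected_k=290.15)
print(f"mean corrected rib temperature: {t_rib.mean() - 273.15:.1f} °C")
```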

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 107
24558 Factors Associated with the Safety of the Patient in Hemodialysis Clinics of a Brazilian Municipality: Cross-Sectional Study

Authors: Magda Milleyde de Sousa Lima, Letícia Lima Aguiar, Marina Guerra Martins, Erika Veríssimo Dias Sousa, Lizandra Sampaio de Oliveira, Lívia Moreira Barros, Joselany Áfio Caetano

Abstract:

Patients with chronic kidney disease are vulnerable to episodes that compromise their safety, mainly due to a treatment process that exposes them to high rates of interventions during hemodialysis sessions. Some factors associated with health care contribute to the risk of death and complications. However, few scientific studies have evaluated the level of safety of hemodialysis clinics and the sociodemographic characteristics of patients and professionals associated with this safety. Therefore, the present study aims to examine the level of patient safety in hemodialysis clinics in a Brazilian state capital and to identify the sociodemographic and clinical factors of patients and nursing staff associated with the level of safety. This is an observational, descriptive and quantitative study conducted in three hemodialysis clinics located in the city of Fortaleza-CE, Brazil, from September to November 2019. The sample, defined by a sample-size calculation for finite populations, comprised 200 chronic renal patients, 30 nursing technicians and seven nurses. Convenience sampling was used based on the inclusion criteria: being present at the hemodialysis session on the day the researcher performed the data collection and being 18 years of age or older. Participants who presented communication difficulties in listening to and/or answering the sociodemographic and clinical questionnaire were excluded. Two instruments were applied: a sociodemographic and clinical characterization form and the Chronic Renal Patient Safety Assessment Scale on Hemodialysis (EASPRCH). The data were analyzed using the Kruskal-Wallis test for categorical variables and the Spearman correlation coefficient for non-categorical variables, using the Statistical Package SPSS version 20.0. The present study respected the ethical and legal principles determined by resolution 466/2012 of the National Health Council, under the approval of the Ethics and Research Committee with opinion number 3,255,635. The results showed that one hemodialysis clinic presented unsafe care practices, with 32 points on the EASPRCH (p=0.001). A statistical association was identified between the level of safety and the following patient variables: level of education (p=0.018), family income (p=0.049), type of employment (p=0.012), venous access site (p=0.009), use of medication during the session (p=0.008) and time on hemodialysis (p=0.002). When evaluating the profile of nurses, a statistical association was evidenced between the level of safety and the variables: marital status (p=0.000), race (p=0.017), schooling (p=0.000), income (p=0.013), age (p=0.000), clinic workload (p=0.000), time working with hemodialysis (p=0.000), time working in the clinic (p=0.007) and clinic sizing (p=0.000). In turn, the sociodemographic factors of nursing technicians associated with the level of patient safety were race (p=0.001) and weekly workload (p=0.010). Therefore, it is concluded that there is non-conformity in the level of patient safety in one of the clinics studied and that sociodemographic and clinical factors of patients and health professionals contribute to the level of safety of the health unit.
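
To illustrate the analysis the abstract describes (Kruskal-Wallis for comparisons across categorical groups, Spearman for correlations with the EASPRCH score), a minimal sketch is given below; the file and column names are hypothetical stand-ins, and the original analysis was run in SPSS 20.0.

```python
import pandas as pd
from scipy.stats import kruskal, spearmanr

# Hypothetical data file: one row per patient with an EASPRCH safety score
df = pd.read_csv("easprch_scores.csv")

# Kruskal-Wallis: does the safety score differ across education levels?
groups = [g["easprch_score"].values for _, g in df.groupby("education_level")]
h_stat, p_kw = kruskal(*groups)

# Spearman: safety score vs. a non-categorical variable (time on hemodialysis, months)
rho, p_sp = spearmanr(df["easprch_score"], df["hemodialysis_time_months"])

print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.3f}")
print(f"Spearman: rho={rho:.2f}, p={p_sp:.3f}")
```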

Keywords: hemodialysis, nursing, patient safety, quality improvement

Procedia PDF Downloads 185
24557 Performance, Yolk and Serum Cholesterol of Shaver-Brown Layers Fed Moringa Leaf Meal and Sun Dried Garlic Powder

Authors: Anselm Onyimonyi, A. Abaponitus

Abstract:

One hundred and ninety-two Shaver-Brown layers aged 40 weeks were used in a 10-week feeding trial to investigate the effect of supplementary moringa leaf meal and sun-dried garlic powder (MOGA) on the performance, egg yolk and serum cholesterol profiles of the birds. The birds were randomly assigned to four treatments in a 2 x 2 factorial arrangement in a completely randomized design with 48 birds per treatment. Each treatment had 24 replicates of 2 birds, with each bird separately housed in a cell in a battery cage. Birds on treatment 1 received a standard layers mash (16.5% CP and 3000 kcal ME/kg) without any MOGA. Treatment 2 birds received the control diet with 5 g moringa leaf meal/kg of feed, treatment 3 received the control diet with 5 g sun-dried garlic powder/kg of feed, and treatment 4 had a combination of 5 g each of moringa leaf meal and sun-dried garlic powder/kg of feed. Data were kept on daily egg production, egg weight and feed intake. Ten eggs were collected per treatment at the end of the study for yolk cholesterol determination. Blood samples from four birds per treatment were collected and used for serum cholesterol and triglyceride determination. Results showed that birds on treatment 3 (5 g moringa leaf meal/kg of feed) had a significantly higher (P < 0.05) hen-day egg production of 83.3% as against 78.75%, 65.05% and 66.67% recorded for the control, T2 and T4 birds, respectively. The egg weight of 56.39 g recorded for the same birds on treatment 3 was significantly (P < 0.05) lower than the values of 62.61 g, 60.99 g and 59.33 g recorded for birds on T4, T1 and T2, respectively. Yolk and serum cholesterol profiles of the moringa leaf meal fed birds were significantly (P < 0.05) lowered when compared to those of the other treatments. Comparatively, the birds on the MOGA diets had significantly reduced yolk and serum cholesterol compared to the control. It is concluded that supplementation of moringa leaf meal and sun-dried garlic powder at the levels used in this study will result in the production of nutritionally healthier eggs with less yolk and serum cholesterol.

Keywords: performance, cholesterol, moringa, garlic

Procedia PDF Downloads 502
24556 A Stochastic Diffusion Process Based on the Two-Parameter Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we use simulated data to analyse the computational problems associated with parameter estimation, an issue of great importance for application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that is available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
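
For concreteness, a minimal sketch of the model family described above is given below, assuming a standard two-parameter Weibull density as the trend target; the authors' exact bi-Weibull specification and drift parameterization are not reproduced here.

```latex
% Two-parameter Weibull density (scale \alpha, shape \beta):
\[
f(t;\alpha,\beta) \;=\; \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1}
\exp\!\left[-\left(\frac{t}{\alpha}\right)^{\beta}\right], \qquad t>0,\ \alpha,\beta>0 .
\]
% A lognormal-type diffusion whose trend is tied to the density: infinitesimal mean
% h(t)x and infinitesimal variance \sigma^2 x^2, with h(t) chosen so that the trend
% E[X(t)] is proportional to f(t;\alpha,\beta).
\[
dX(t) \;=\; h(t)\,X(t)\,dt \;+\; \sigma\,X(t)\,dW(t).
\]
```

With this form, the change of variable Y(t) = ln X(t) reduces the process to a Wiener process with drift (the Ricciardi-type transformation mentioned in the abstract), from which the transition density, and hence the likelihood used for maximum likelihood estimation, can be written down.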

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function

Procedia PDF Downloads 292
24555 Impact of Communism Policy on Religion Identity in Pogradec District, Albania

Authors: Gjergji Buzo

Abstract:

This paper presents the communist policy towards tangible and intangible religious heritage in the Pogradec District, Albania. The district of Pogradec lies in the southeast of Albania and consists of the municipality, located on the shore of Lake Ohrid, and 7 Administrative Units, with a population of about 61,530 inhabitants. According to the statistical data provided by the Institute of Statistics, the city of Pogradec has 55.9% Muslims, 19.9% Orthodox, 1.4% Catholics and 1.1% Bektashi, while the religious affiliation in the Administrative Units is as follows: Muslim 72.1%, Orthodox 3.32%, Catholic 1.18%, Bektashi 0.2%. The percentages are approximate values, taking into consideration that 13.8% of the total population preferred not to answer the question on religion and that for 2.4% of the persons who answered, the information provided was not relevant or not stated. The percentage of persons who declared themselves believers without belonging to any religion was 5.5%, and the percentage who declared themselves non-believers not belonging to any religion was 2.5%. The percentage of persons who declared themselves evangelists was 0.1%, and the percentage declared as "other Christians" was 0.1%. About 80% of the population believe in God, and most of them practice one of the monotheist religions. We have divided religious practice into three major periods: the first lasts until 1967, when different religions were practiced in Pogradec in harmony with each other; the second is the period 1967-1990, during which the practice of religion was prohibited; and the third is the period after 1990, when religious freedom was restored. This article focuses on the communist period 1967-1990, when Albania (and Pogradec as part of it) became the only atheist country in the world. The object of the study is the impact of these policies on spiritual and material religious identity. The communist regime destroyed or transformed religious objects, whether Islamic or Christian, and prohibited the practice of religious rituals in Albania. It pursued an education policy instilling an atheistic spirit among young people, characterizing religion as the opium of the people. All this left traces on the people and brought about a deformation of religious identity. In order to better understand the reality of that time and how this policy was experienced by the people, we conducted a survey in the Pogradec District with the participation of 1,000 people.

Keywords: communism policy, heritage, identity, religion, statistics, survey

Procedia PDF Downloads 54
24554 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but also for researchers looking to make comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of elastic search applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and the topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making this public data accessible to the broader public, this work has the potential to transform environmental understanding, engagement, and action.
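
As a rough illustration of linking plain-English queries to dataset metadata topics, the sketch below builds a small bipartite graph from TF-IDF cosine similarities. This is not the authors' pipeline, which uses custom NLP methods over NASA Earthdata metadata; the dataset identifiers, descriptions, queries, and threshold here are hypothetical.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical dataset metadata (id -> short description) and plain-English queries
metadata = {
    "DAILY_PRECIP": "Daily accumulated precipitation estimates from satellite observations",
    "LAND_SURF_TEMP": "Daily land surface temperature and emissivity from infrared imagery",
}
queries = ["how much rain fell last month", "surface temperature trends"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(metadata.values())
query_matrix = vectorizer.transform(queries)
similarity = cosine_similarity(query_matrix, doc_matrix)

# Bipartite graph linking queries to datasets above an (arbitrary) similarity threshold
graph = nx.Graph()
for qi, query in enumerate(queries):
    for di, dataset_id in enumerate(metadata):
        if similarity[qi, di] > 0.05:
            graph.add_edge(f"query:{query}", f"dataset:{dataset_id}",
                           weight=float(similarity[qi, di]))

print(sorted(graph.edges(data=True), key=lambda e: -e[2]["weight"]))
```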

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 133
24553 Parabolic Impact Law of High Frequency Exchanges on Price Formation in Commodities Market

Authors: L. Maiza, A. Cantagrel, M. Forestier, G. Laucoin, T. Regali

Abstract:

Evaluation of the impact of High Frequency Trading (HFT) on financial markets is very important for traders who use market analysis to detect winning transaction opportunities. An analysis of HFT data on the tobacco commodity market is discussed here, and an interesting linear relationship is shown between trading frequency and the difference between averaged trading prices above and below the considered trading frequency. This may open new perspectives on understanding market data and could provide a possible interpretation of Adam Smith's invisible hand.

Keywords: financial market, high frequency trading, analysis, impacts, Adam Smith invisible hand

Procedia PDF Downloads 344
24552 Design Thinking Activities: A Tool in Overcoming Student Reticence

Authors: Marinel Dayawon

Abstract:

Student participation in classroom activities is vital in the teaching-learning process, as it develops students' self-confidence, social relationships and academic performance. It takes the teacher's empathy and creativity to create solutions that encourage teamwork and mutual support while reducing the academic competition within the class that keeps shy students from walking with courage and talking with conviction, because they consider their ideas weak compared to those of the bright students. This study aimed to explore design thinking strategies that change the mindset of shy students in classroom activities, maximizing their participation in all given tasks, letting them share their views through ideation, and providing them a wider world through compromise agreements among the members of the group and sensitivity to one another's ideas, thus arriving at a collective decision in the development of a prototype that indicates improvement in their classroom involvement. The study used a qualitative type of research. Triangulation was done through participant observation, focus group discussion and interviews, documented through photos and videos. The respondents were the second-year Bachelor of Secondary Education students of the Institute of Teacher Education at Isabela State University - Cauayan City Campus. The results of the study revealed that reticent students, when involved in game activities through a slap-and-tap method and in writing their clustered ideas on sticky notes, are excited to share ideas, as these activities do not rely on oral communication. It was also observed that, after three weeks of using the design thinking strategies, shy students volunteered as secretary, rapporteur or group leader in the team-building activities, as these roles represent the ideas of the heterogeneous group and remove the individual identity of the ideas. Superior students learned to listen to the ideas of the reticent students and involved them in the prototyping process of designing a remediation program for high school students showing reticence in the classroom, using their own experience as a benchmark. The strategies made a 360-degree transformation of the shy students, documented in their journal logs, in their journey to being open. Thus, faculty members are now adopting the design thinking approach.

Keywords: design thinking activities, qualitative, reticent students, Isabela, Philippines

Procedia PDF Downloads 211
24551 The Determinants of Financial Ratio Disclosures and Quality: Evidence from an Emerging Market

Authors: Ben Kwame Agyei-Mensah

Abstract:

This study investigated the influence of firm-specific characteristics, namely the proportion of non-executive directors, ownership concentration, firm size, profitability, debt-equity ratio, liquidity and leverage, on the extent and quality of financial ratios disclosed by firms listed on the Ghana Stock Exchange. The research was conducted through detailed analysis of the 2012 financial statements of the listed firms. Descriptive analysis was performed to provide background statistics for the variables examined. This was followed by regression analysis, which forms the main data analysis. The mean extent of financial ratio disclosure of 62.78% indicates that most of the firms listed on the Ghana Stock Exchange did not overwhelmingly disclose such ratios in their annual reports. The low mean quality of financial ratio disclosure of 6.64% indicates that the disclosures fell far short of the International Accounting Standards Board's qualitative characteristics of relevance, reliability, comparability and understandability. The results of the multiple regression analysis show that leverage (gearing ratio) and return on investment (dividend per share) are statistically significantly associated with the extent of financial ratio disclosure. Ownership concentration and the proportion of (independent) non-executive directors, on the other hand, were found to be statistically associated with the quality of the financial ratios disclosed. There is a significant negative relationship between ownership concentration and the quality of financial ratio disclosure, meaning that under a higher level of ownership concentration, lower-quality financial ratios are disclosed. The findings also show a significant positive relationship between board composition (proportion of non-executive directors) and the quality of financial ratio disclosure.
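
A minimal sketch of the kind of disclosure-index regression described above is shown below; the data file, column names, and linear specification are assumptions for illustration, not the paper's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data: one row per listed firm (2012 annual reports)
firms = pd.read_csv("ghana_listed_firms_2012.csv")

# Extent and quality of financial ratio disclosure (both in %) as dependent variables
extent_model = smf.ols(
    "disclosure_extent ~ nonexec_proportion + ownership_concentration + "
    "firm_size + profitability + debt_equity + liquidity + leverage",
    data=firms).fit()
quality_model = smf.ols(
    "disclosure_quality ~ nonexec_proportion + ownership_concentration + "
    "firm_size + profitability + debt_equity + liquidity + leverage",
    data=firms).fit()

print(extent_model.summary())
print(quality_model.summary())
```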

Keywords: voluntary disclosure, firm-specific characteristics, financial reporting, financial ratio disclosure, Ghana stock exchange

Procedia PDF Downloads 575
24550 Identifying Game Variables from Students’ Surveys for Prototyping Games for Learning

Authors: N. Ismail, O. Thammajinda, U. Thongpanya

Abstract:

Games-based learning (GBL) has become increasingly important in teaching and learning. This paper explains the first two phases (analysis and design) of a GBL development project, ending with a prototype design based on students' and teachers' perceptions. The two phases are part of a full-cycle GBL project aiming to help secondary school students in Thailand in their study of Comprehensive Sex Education (CSE). In the course of the study, we invited 1,152 students to complete questionnaires and interviewed 12 secondary school teachers in focus groups. This paper found that GBL can serve students in their learning about CSE, enabling them to gain understanding of their sexuality, develop skills, including critical thinking skills, and interact with others (peers, teachers, etc.) in a safe environment. The objectives of this paper are to outline the development of GBL variables from the research question(s) into the developers' flow chart, to be responsive to the GBL beneficiaries' preferences and expectations, and to help in answering the research questions. This paper details the steps applied to generate GBL variables that can feed into a game flow chart to develop a GBL prototype. In our approach, we detailed two models: (1) the Game Elements Model (GEM) and (2) the Game Object Model (GOM). There are three outcomes of this research – first, to achieve the objectives and benefits of GBL in learning, game design has to start with the research question(s) and the challenges to be resolved as research outcomes. Second, aligning the educational aims with engaging GBL end users (students) within the data collection phase to inform the game prototype with the game variables is essential to addressing the answer/solution to the research question(s). Third, for efficient GBL to bridge the gap between pedagogy and technology, and in order to answer the research questions via technology (i.e. GBL) and to minimise the isolation between the pedagogists "P" and technologists "T", several meetings and discussions need to take place within the team.

Keywords: games-based learning, engagement, pedagogy, preferences, prototype

Procedia PDF Downloads 156
24549 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study of China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges to suitable human settlements and provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considered the coupling effect among indicators was used to determine the weights of the indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) Human settlement suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016. Among the subsystems, it decreased for the residence subsystem, whereas it increased for the economy subsystem. (4) The suitability of the nature subsystem has become a limiting factor for the improvement of human settlement suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can help support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society and residence context.
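
To illustrate how indicator weights can be derived from the extended Fourier amplitude sensitivity test (eFAST), a minimal sketch using the SALib package is given below with a toy three-indicator suitability function; the indicator names, bounds, aggregation function, and the normalization of total-order indices into weights are illustrative assumptions rather than the study's actual scheme.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Toy problem: three normalized indicators feeding a suitability score
problem = {
    "num_vars": 3,
    "names": ["nature", "economy", "residence"],
    "bounds": [[0.0, 1.0]] * 3,
}

X = fast_sampler.sample(problem, 1000)

# Hypothetical coupled aggregation standing in for the HSSI calculation
Y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * X[:, 0] * X[:, 1]

Si = fast.analyze(problem, Y)

# One plausible weighting: normalize the total-order sensitivity indices
weights = np.array(Si["ST"]) / np.sum(Si["ST"])
print(dict(zip(problem["names"], weights.round(3))))
```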

Keywords: human settlements, suitability evaluation, extended Fourier amplitude sensitivity test, human settlement suitability

Procedia PDF Downloads 61
24548 Comparison of Water Curing and Carbonation Curing on Mortar Mix Incorporating Cement Kiln Dust

Authors: Devender Sharma, Shweta Goyal

Abstract:

Sustainable development is key to protecting the environment for a secure future. Accelerated carbonation curing is a comparatively new technique for curing concrete which involves sequestration of carbon dioxide gas into the precast concrete, resulting in improvement of the properties of the concrete. This paper presents the results of a study to evaluate the effect of carbonation curing on cement mortars incorporating cement kiln dust (CKD) as a partial replacement of cement. The mortar specimens were prepared by replacing cement with CKD in varying percentages of 0-50% by weight of cement. The specimens were subjected to 12-hour carbonation curing, followed by sealed packing until the testing age. The results were compared with the normal curing procedure, in which the specimens were water-cured until the testing age. The compressive strength and microstructure of the mixes were studied. It was noted that on increasing the percentage of CKD up to 10% by weight of cement, no considerable change was observed in the compressive strength. But as the percentage of CKD was further increased, there was a decrease in compressive strength, with strength decreasing by up to 40% when 50% of the cement was replaced with CKD. The decrease in strength is due to the lower lime content of CKD compared to cement. High ettringite formation was observed in mixes with high percentages of CKD, consistent with the decrease in compressive strength. With carbonation curing, an early-age strength gain was observed in the mortars, even with higher percentages of CKD. The early strength of the carbonation-cured mixes was found to be greater than that of the water-cured mixes irrespective of the percentage of CKD. The 7-day and 28-day compressive strengths of the mixes were comparable for both the carbonation-cured and water-cured specimens. The increase in compressive strength can be attributed to the conversion of unstable Ca(OH)2 into stable CaCO3, which causes densification of the mix. CaCO3 precipitation and greater CSH gel formation were clearly observed in the SEM images of the carbonation-cured specimens, indicating higher compressive strength. Thus, carbonation curing can be used as an efficient method to enhance the properties of concrete.

Keywords: carbonation, cement kiln dust, compressive strength, microstructure

Procedia PDF Downloads 215
24547 Non-linear Model of Elasticity of Compressive Strength of Concrete

Authors: Charles Horace Ampong

Abstract:

Non-linear models have been found to be useful in modeling the elasticity (a measure of the degree of responsiveness) of a dependent variable with respect to a set of independent variables, ceteris paribus. This constant elasticity principle was applied to the dependent variable (compressive strength of concrete in MPa), which was found to be non-linearly related to the independent variable (water-cement ratio in kg/m3) for given ages of concrete in days (3, 7, 28) at different levels of the admixtures superplasticizer (in kg/m3), blast furnace slag (in kg/m3) and fly ash (in kg/m3). The levels of the admixtures were categorized as: S1 = some superplasticizer added and S0 = no superplasticizer added; B1 = some blast furnace slag added and B0 = no blast furnace slag added; F1 = some fly ash added and F0 = no fly ash added. The number of observations (samples) used for the research was one hundred and thirty-two (132) in all. For superplasticizer, it was found that the compressive strength of concrete was more elastic with respect to the water-cement ratio at the S1 level than at the S0 level for the given concrete ages of 3, 7 and 28 days. For blast furnace slag, compressive strength with respect to the water-cement ratio was more elastic at the B0 level than at the B1 level for concrete ages of 3, 7 and 28 days. For fly ash, compressive strength with respect to the water-cement ratio was more elastic at the F0 level than at the F1 level for ages 3, 7 and 28 days. The research also tested different combinations of the levels of superplasticizer, blast furnace slag and fly ash. It was found that the elasticity of compressive strength with respect to the water-cement ratio was lowest (elasticity = -1.746) with a combination of S0, B0 and F0 for a concrete age of 3 days. This was followed by an elasticity of -1.611 with a combination of S0, B0 and F0 for a concrete age of 7 days. The highest was an elasticity of -1.414 with a combination of S0, B0 and F0 for a concrete age of 28 days. Based on the preceding outcomes, three (3) non-linear model equations for predicting the output elasticity of compressive strength of concrete (in %) or the value of compressive strength of concrete (in MPa) with respect to the water-cement ratio were formulated. The model equations were based on the three different ages of concrete, namely 3, 7 and 28 days, under investigation. The three models showed that higher elasticity translates into higher compressive strength, and they revealed a trend of increasing concrete strength from 3 to 28 days for a given water-cement ratio. Using the models, an increasing modulus of elasticity from 3 to 28 days was deduced.
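
A minimal sketch of a constant-elasticity (log-log) specification consistent with the abstract is given below; the exact functional form, admixture terms, and fitted coefficients used by the author are not reported here.

```latex
% Constant-elasticity (log-log) specification: the elasticity of compressive strength
% CS with respect to the water-cement ratio W/C is the constant slope \beta_1.
\[
\ln(\mathrm{CS}) \;=\; \beta_0 + \beta_1 \ln\!\left(\frac{W}{C}\right) + \varepsilon,
\qquad
\eta \;=\; \frac{\partial \ln(\mathrm{CS})}{\partial \ln(W/C)} \;=\; \beta_1 .
\]
```

Under such a specification, the elasticity is the constant slope β1, so the reported value of -1.414 at 28 days (S0, B0, F0) would mean that a 1% increase in the water-cement ratio is associated with roughly a 1.41% decrease in compressive strength, holding the admixture levels fixed.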

Keywords: concrete, compressive strength, elasticity, water-cement

Procedia PDF Downloads 282
24546 Environmental and Socioeconomic Determinants of Climate Change Resilience in Rural Nigeria: Empirical Evidence towards Resilience Building

Authors: Ignatius Madu

Abstract:

The study aims at assessing the environmental and socioeconomic determinants of climate change resilience in rural Nigeria. This is necessary because research and development efforts on building the climate change resilience of rural areas in developing countries are usually made without knowledge of the impacts of the inherent rural characteristics that determine the resilience capacities of households. This has, in many cases, led to costly mistakes, delayed responses, inaccurate outcomes, and other difficulties. Consequently, this assessment becomes crucial not only for policymakers and people living in risk-prone rural environments but also to fill the research gap. To achieve the aim, secondary data were obtained from the Annual Abstract of Statistics 2017, the LSMS-Integrated Surveys on Agriculture and General Household Survey Panel 2015/2016, and the National Agriculture Sample Survey (NASS) 2010/2011. Resilience was calculated by weighting and adding the adaptive, absorptive and anticipatory measures of household variables aggregated at the state level, and the resulting index was then regressed against the rural environmental and socioeconomic characteristics influencing it. From the regression, the coefficients of the variables were used to compute the impacts of the variables using the Stochastic Impacts by Regression on Population, Affluence and Technology (STIRPAT) model. The results showed that the northern states are generally low in resilience indices and are impacted less by the development indicators. The major determining factors are the percentage of non-poor, environmental protection, road transport development, landholding, agricultural input, population density, dependency ratio (inverse), household assets, education and maternal care. The paper concludes that any effort at successful resilience building in rural areas of the country should first address these key factors that enhance rural development and wellbeing, since it is better to take action before shocks take place.
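
For reference, the standard STIRPAT specification and its log-linear estimating form are sketched below; the abstract extends the right-hand side with the rural determinants listed above, and the exact extended model is not reproduced here.

```latex
% STIRPAT: I is the impact (here the resilience index), P population, A affluence,
% T technology, e the error term; b, c, d are elasticities estimated by regression.
\[
I_i \;=\; a\,P_i^{\,b}\,A_i^{\,c}\,T_i^{\,d}\,e_i
\quad\Longrightarrow\quad
\ln I_i \;=\; \ln a + b\ln P_i + c\ln A_i + d\ln T_i + \ln e_i .
\]
```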

Keywords: climate change resilience, spatial impacts, STIRPAT model, Nigeria

Procedia PDF Downloads 135
24545 Translanguaging as a Decolonial Move in South African Bilingual Classrooms

Authors: Malephole Philomena Sefotho

Abstract:

Nowadays, the majority of people worldwide are bilingual rather than monolingual due to the surge of globalisation and mobility. Consequently, bilingual education is a topical issue of discussion among researchers. Several studies that have focussed on it have highlighted the importance of, and need for, incorporating learners' linguistic repertoires in multilingual classrooms and moving away from the colonial approach of a monolingual bias – one language at a time. Researchers have pointed out that a systematic approach that involves the concurrent use of languages, and not a separation of languages, must be implemented in bilingual classroom settings. Translanguaging emerged as a systematic approach that assists learners in making meaning of their world, and it involves allowing learners to utilize all their linguistic resources in their classrooms. The South African language policy also makes room for the use of diverse languages in bi/multilingual classrooms. This study, therefore, sought to explore how teachers apply translanguaging in bilingual classrooms to incorporate learners' linguistic repertoires. It further establishes teachers' perspectives on the use of more than one language in teaching and learning. The participants in this study were language teachers who teach at bilingual primary schools in Johannesburg, South Africa. Semi-structured interviews were conducted to establish their perceptions of the concurrent use of languages. A qualitative research design was followed in analysing the data. The findings showed that teachers were reluctant to allow translanguaging to take place in their classrooms even though they realise its importance. Not allowing bilingual learners to use their linguistic repertoires has resulted in learners' negative attitudes towards their languages and contributed to the loss of their identity. This article thus recommends a drastic change towards decolonised approaches to teaching and learning in multilingual settings, with translanguaging as a decolonial move in which learners are allowed to translanguage freely in their classroom settings for better comprehension and meaning-making of concepts and related ideas. It further proposes that continuous conversations be encouraged to bring the imminent cultural and linguistic genocide to a halt.

Keywords: bilingualism, decolonisation, linguistic repertoires, translanguaging

Procedia PDF Downloads 160
24544 Investigating Sub-daily Responses of Water Flow of Trees in Tropical Successional Forests in Thailand

Authors: Pantana Tor-Ngern

Abstract:

In the global water cycle, tree water use (Tr) contributes substantially to evapotranspiration, the total amount of water evaporated from terrestrial ecosystems to the atmosphere, thereby regulating climate. Tree water use responds to environmental factors, including atmospheric humidity and sunlight (represented by vapor pressure deficit, VPD, and photosynthetically active radiation, PAR, respectively) and soil moisture. In forests, Tr responses to such factors depend on species and their spatial and temporal variations. Tropical forests in Southeast Asia (SEA) have experienced land-use conversion from abandoned agricultural practices, resulting in patches of forest at different stages, including old-growth and secondary forests. Because inherent structures such as canopy height and tree density vary significantly among forests at different stages and can strongly affect their respective microclimates, Tr and its responses to changing environmental conditions in successional forests may differ. Daily and seasonal variations in the environmental factors may exert significant impacts on the respective Tr patterns. Extrapolating Tr data from short periods of days to longer periods of seasons or years can be complex and is important for estimating long-term ecosystem water use, which often includes both normal and abnormal climatic conditions. Thus, this study aims to investigate the diurnal variation of Tr, using measured sap flux density (JS) data, with changes in VPD in eight evergreen tree species in an old-growth forest (hereafter OF; >200 years old) and a young forest (hereafter YF; <10 years old) in Khao Yai National Park, Thailand. The studied species included Syzygium syzygoides, Aquilaria crassna, Cinnamomum subavenium, Nephelium melliferum and Altingia excelsa in OF, and Syzygium nervosum and Adinandra integerrima in YF. Only Syzygium antisepticum was found in both forest stages. Specifically, hysteresis, which indicates asymmetrical changes of JS in response to changing VPD across the daily timescale, was examined in these species. Results showed no hysteresis in any species in OF, except Altingia excelsa, which exhibited a 3-hour delayed JS response to VPD. In contrast, JS of all species in YF displayed one-hour delayed responses to VPD. The OF species that showed no hysteresis indicated that their canopies are well coupled with the atmosphere, facilitating the gas exchange that is essential for tree growth. The delayed responses in Altingia excelsa in OF and in all species in YF were associated with higher JS in the morning than in the afternoon. This implies that these species are sensitive to drying air, closing their stomata relatively rapidly as atmospheric humidity decreases (VPD increases). Such behavior is often observed in trees growing in dry environments. This study suggests that detailed investigation of JS at sub-daily timescales is imperative for a better understanding of the mechanistic responses of trees to the changing climate, which will benefit the improvement of earth system models.

Keywords: sap flow, tropical forest, forest succession, thermal dissipation probe

Procedia PDF Downloads 46
24543 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on the quality of data from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement in the Neno district HMIS over a 4-year period (2018 – 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data for the Neno district captured from 2018 to 2021. The metrics assessed included the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports. Timeliness was measured as the percentage of reports submitted within the expected reporting timeframe. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we drew healthcare workers' perceptions, interventions implemented, and improvement suggestions. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% (before) and 98.1% (after), while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p<0.004). The trend for timeliness had been declining except in 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p<0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or more for completeness, while only 24% did so for timeliness. Thirty-two percent of reports met the national standard. Only 9% improved in both completeness and timeliness; these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. On the other hand, factors associated with improvement included improved communications and reminders through internal communication, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that data quality in the district HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about these interventions and guide initiatives for improving data quality.
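
A minimal sketch of how the completeness and timeliness metrics and the before/after comparison could be computed is shown below; the file, column names, and the Welch t-test on monthly completeness are illustrative assumptions, whereas the study reports its analysis in R and Excel 2016.

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical report log: one row per facility report, with submission date and deadline
reports = pd.read_csv("neno_hmis_reports.csv", parse_dates=["submitted_on", "deadline"])

reports["year"] = reports["deadline"].dt.year
reports["complete"] = reports["submitted_on"].notna()
reports["on_time"] = reports["complete"] & (reports["submitted_on"] <= reports["deadline"])

summary = reports.groupby("year")[["complete", "on_time"]].mean() * 100
print(summary.round(1))  # completeness and timeliness (%) per year

# Compare monthly completeness before (2018-2019) vs. after (2020-2021) the quarterly reviews
monthly = reports.groupby(reports["deadline"].dt.to_period("M"))["complete"].mean()
before = monthly[monthly.index.year < 2020]
after = monthly[monthly.index.year >= 2020]
t_stat, p_value = ttest_ind(before, after, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
```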

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 68
24542 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric

Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff

Abstract:

When grown only to nanometric sizes, metallic particles (e.g. Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents a collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties. This can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high-vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ with a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The room-temperature-deposited samples annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface. Therefore the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, lateral particle sizes are reduced at higher temperatures, while particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, so that GISAXS-deduced sizes partially reflect the dimensions of particle agglomerates. Lateral maxima in GISAXS are an indication of good positional correlation, and the particle-to-particle distance increases as the particles grow with rising temperature. This correlation is much stronger in the capped and lower-temperature-deposited samples. The dewetting is much more vigorous in the non-capped samples, and since nanoparticles are formed in a range of sizes, the correlation recedes with both deposition and annealing temperature. Surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of the surface features were larger compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy, and the relatively weak plasmonic effect could be explained by incomplete dewetting or partial interconnection of the formed particles.

Keywords: copper, GISAXS, nanoparticles, plasmonics

Procedia PDF Downloads 110
24541 Evaluate the Changes in Stress Level Using Facial Thermal Imaging

Authors: Amin Derakhshan, Mohammad Mikaili, Mohammad Ali Khalilzadeh, Amin Mohammadian

Abstract:

This paper proposes a stress recognition system based on multi-modal bio-potential signals. For stress recognition, Support Vector Machines (SVM) and LDA are applied to design the stress classifiers, and their characteristics are investigated. Using data gathered under psychological polygraph experiments, the classifiers are trained and tested. The pattern recognition method classifies stressful from non-stressful subjects based on labels that come from the polygraph data. The successful classification rate is 96% for 12 subjects. This suggests that facial thermal imaging, owing to its non-contact advantage, could be a remarkable alternative to psycho-physiological methods.
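
A minimal sketch of training and cross-validating SVM and LDA classifiers of the kind described above is shown below; the feature files, facial-region temperature features, and cross-validation setup are illustrative assumptions rather than the authors' protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per trial, columns = facial-region temperature
# statistics (e.g., periorbital/nose-tip means and variances); labels from polygraph scoring.
X = np.load("thermal_features.npy")   # shape (n_trials, n_features)
y = np.load("stress_labels.npy")      # 1 = stressful, 0 = non-stressful

svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
lda_clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

for name, clf in [("SVM", svm_clf), ("LDA", lda_clf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2%}")
```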

Keywords: stress, thermal imaging, face, SVM, polygraph

Procedia PDF Downloads 470
24540 Entrepreneurs’ Perceptions of the Economic, Social and Physical Impacts of Tourism

Authors: Oktay Emir

Abstract:

The objective of this study is to determine how entrepreneurs perceive the economic, social and physical impacts of tourism. The study was conducted in the city of Afyonkarahisar, Turkey, which is rich in thermal tourism resources and investments. A survey was used as the data collection method, and the questionnaire was administered to 472 entrepreneurs. A simple random sampling method was used to identify the sample. Independent-samples t-tests and ANOVA were used to analyse the data obtained. Additionally, some statistically significant differences (p<0.05) were found based on the participants' demographic characteristics regarding their opinions about the social, economic and physical impacts of tourism activities.

Keywords: tourism, perception, entrepreneurship, entrepreneurs, structural equation modelling

Procedia PDF Downloads 438
24539 Conceptual Design of Gravity Anchor Focusing on Anchor Towing and Lowering

Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg

Abstract:

Wind power is one of the leading renewable energy generation methods. Due to the abundant, higher wind speeds far away from shore, the construction of offshore wind turbines began in the last decades. However, the installation of foundation-based (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been adapted from the oil and gas industry. A universal heavyweight gravity anchor (UGA) for the floating foundations of floating tension leg platform (TLP) sub-structures is developed in this research work. It is funded by the German Federal Ministry of Education and Research within a three-year (2019-2022) research program called "Offshore Wind Solutions Plus (OWSplus) - Floating Offshore Wind Solutions Mecklenburg-Vorpommern," carried out by a group of German institutions (universities, laboratories, and consulting companies). This part of the project focuses on the numerical modeling of the gravity anchor, which involves analyzing and solving fluid flow problems. Compared to gravity-based torpedo anchors, this UGA will be towed and lowered by controlled machines (tug boats) at lower speeds. This kind of UGA installation is new to the offshore wind industry, particularly for TLPs, and very few research works have been carried out on it in recent years. Conventional methods for transporting an anchor require a large transportation crane vessel, which involves greater cost. The conceptual UGA consists of ballasting chambers that utilize buoyancy forces: the chambers are filled with just enough water that the anchor can float on the water for towing. After reaching the installation site, the chambers are ballasted with water for lowering. After its lifetime, the UGA can be deballasted (for retrieval or replacement), causing it to rise to the sea surface by itself; the buoyancy chambers thus allow a UGA to be used without the need for heavy machinery. However, while the UGA is lowered towards or raised from the seabed, it experiences difficult, harsh marine environments due to the interaction of waves and currents. This leads to drifting of the anchor from the desired installation position and damage to the lowering machines. To overcome such problems, a numerical model is built to investigate the influence of different outer contours and other flow-governing shapes that can be fitted to the UGA to overcome the turbulence and drifting. The presentation will highlight the importance of the Computational Fluid Dynamics (CFD) numerical model in OpenFOAM, which is open-source software.

Keywords: anchor lowering, towing, waves, currents, computational fluid dynamics

Procedia PDF Downloads 153
24538 A Thermo-mechanical Finite Element Model to Predict Thermal Cycles and Residual Stresses in Directed Energy Deposition Technology

Authors: Edison A. Bonifaz

Abstract:

In this work, a numerical procedure is proposed to design dense multi-material structures using the Directed Energy Deposition (DED) process. A thermo-mechanical finite element model to predict thermal cycles and residual stresses is presented. A numerical layer build-up procedure coupled with a moving heat flux was constructed to minimize the strains and residual stresses that result from the multi-layer deposition of an AISI 316 austenitic steel on an AISI 304 austenitic steel substrate. To simulate the DED process, the automated interface of the ABAQUS AM module was used to define element activation and heat input event data as a function of time and position. In this manner, the construction of ABAQUS user-defined subroutines was not necessary. The thermal cycles and thermally induced stresses created during melt pool crystallization in the multi-layer metal AM deposition were predicted and validated. Results were analyzed in three independent metal layers of three different experiments. The one-way heat and material deposition toolpath used in the analysis was created with a MATLAB path script. An optimal combination of feedstock and heat input printing parameters suitable for fabricating multi-material dense structures in the directed energy deposition metal AM process was established. At constant power, it can be concluded that the lower the heat input, the lower the peak temperatures and residual stresses. This means that, from a design point of view, the one-way heat and material deposition toolpath with the higher welding speed should be selected.
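
To illustrate the idea of a one-way (unidirectional) deposition toolpath expressed as event-series data of time, position, and heat input, a minimal sketch is given below; the track geometry, speed, power values, and CSV layout are illustrative assumptions, and the exact event-series conventions of the ABAQUS AM module (and the authors' MATLAB script) are not reproduced here.

```python
import csv

# Hypothetical single-layer, one-way raster toolpath written as (time, x, y, z, power) rows
hatch_spacing = 1.0      # mm between parallel tracks
track_length = 40.0      # mm
n_tracks = 10
travel_speed = 10.0      # mm/s (the "welding speed")
laser_power = 1000.0     # W while depositing, 0 W otherwise
layer_z = 0.5            # mm
reposition_time = 2.0    # s to return to the start side between tracks

rows, t = [], 0.0
for i in range(n_tracks):
    y = i * hatch_spacing
    rows.append((round(t, 3), 0.0, y, layer_z, 0.0))                 # track start, power off
    t += track_length / travel_speed
    rows.append((round(t, 3), track_length, y, layer_z, laser_power))  # deposit along +x only
    t += reposition_time                                             # rapid return, power off

with open("event_series.csv", "w", newline="") as f:
    csv.writer(f).writerows([("time_s", "x_mm", "y_mm", "z_mm", "power_W")] + rows)
```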

Keywords: event series, thermal cycles, residual stresses, multi-pass welding, abaqus am modeler

Procedia PDF Downloads 49
24537 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum over a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of the pairwise Bhattacharyya distances between their Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge for assuming a regression family, but can also restrict the model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
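
One plausible implementation of the GMM distance described above (a weighted sum of pairwise Bhattacharyya distances between components) is sketched below; the synthetic temperature-salinity-depth samples, number of components, and use of scikit-learn are illustrative assumptions rather than the authors' system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya_gaussian(m1, S1, m2, S2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    S = 0.5 * (S1 + S2)
    diff = m1 - m2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    term2 = 0.5 * np.log(np.linalg.det(S) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    """Weighted sum of pairwise Bhattacharyya distances between GMM components."""
    d = 0.0
    for wa, ma, Sa in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
        for wb, mb, Sb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
            d += wa * wb * bhattacharyya_gaussian(ma, Sa, mb, Sb)
    return d

# Hypothetical temperature-salinity-depth samples (columns: T, S, depth) for two locations
loc_a = np.random.default_rng(0).normal([20, 34.5, 100], [2, 0.3, 50], size=(500, 3))
loc_b = np.random.default_rng(1).normal([15, 34.0, 300], [2, 0.3, 80], size=(500, 3))

gmm_a = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_a)
gmm_b = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_b)
print(f"GMM distance: {gmm_distance(gmm_a, gmm_b):.3f}")
```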

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 129
24536 Small-Sided Games in Football: Effect of Field Sizes on Technical Parameters

Authors: Faruk Guven, Nurtekin Erkmen, Samet Aktas, Cengiz Taskin

Abstract:

The aim of this study was to determine the effect of field size on technical parameters of small-sided games (SSGs) in football players. Eight amateur football players (age: 27.23±3.08 years, height: 171.01±5.36 cm, body weight: 66.86±4.54 kg, sports experience: 12.88±3.28 years) performed 4-a-side small-sided games with different field sizes. The field sizes were 30 x 40 m and 26 x 34 m. SSGs were conducted as a series of 3 bouts of 6 min with 5 min recovery between bouts. All SSGs were video recorded using two digital camcorders positioned on tripods. Shots on target, passes, successful passes, unsuccessful passes, dribbling, tackles, and possession in the SSGs were counted with the Mathball Match Analysis System. The effect of bout on each technical score was examined separately using a Friedman test, and the Mann-Whitney U test was applied to analyse differences between field sizes. There were no significant differences in shots on target, total passes, successful passes, tackles, interceptions, or possession between bouts on the 30 x 40 m field (p>0.05). Unsuccessful passes in bout 3 on the 30 x 40 m field were lower than in bouts 1 and 2 (p<0.05), and dribbling in bout 3 was lower than in bout 2 (p<0.05). There were no significant differences in technical actions between bouts on the 26 x 34 m field (p>0.05). Shots on target in the SSG on the 26 x 34 m field were higher than on the 30 x 40 m field (p<0.05), and unsuccessful passes in bout 3 on the 26 x 34 m field were higher than on the 30 x 40 m field (p<0.05). No significant differences were found in the remaining technical actions between field sizes (p>0.05). In conclusion, this study demonstrates that most technical actions in a 4-a-side SSG are not influenced by these field sizes (30 x 40 m and 26 x 34 m), both over the total SSG time and within each bout, while dribbling and unsuccessful passes decrease in bout 3 on the 30 x 40 m field.
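A minimal sketch of the statistical comparisons described above is given below, using SciPy's Friedman and Mann-Whitney U tests; the per-player action counts are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, mannwhitneyu

# Hypothetical per-player counts of one technical action in each of the 3 bouts
# on the 30 x 40 m field (8 players); values are illustrative only.
bout1 = np.array([12, 9, 11, 10, 8, 13, 9, 10])
bout2 = np.array([11, 10, 12, 9, 9, 12, 8, 11])
bout3 = np.array([8, 7, 9, 8, 6, 10, 7, 8])

# Friedman test: effect of bout (repeated measures) on the technical score
stat, p = friedmanchisquare(bout1, bout2, bout3)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Mann-Whitney U test: compare one action between the two field sizes
field_30x40 = np.array([3, 2, 4, 3, 2, 3, 4, 2])     # e.g. shots on target
field_26x34 = np.array([5, 4, 6, 5, 4, 5, 6, 4])
u, p = mannwhitneyu(field_30x40, field_26x34, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```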

Keywords: small-sided games, football, technical actions, sport science

Procedia PDF Downloads 532
24535 The Distribution of Prevalent Supplemental Nutrition Assistance Program-Authorized Food Store Formats Differ by U.S. Region and Rurality: Implications for Food Access and Obesity Linkages

Authors: Bailey Houghtaling, Elena Serrano, Vivica Kraak, Samantha Harden, George Davis, Sarah Misyak

Abstract:

United States (U.S.) Department of Agriculture Supplemental Nutrition Assistance Program (SNAP) participants are low-income Americans receiving federal dollars for supplemental food and beverage purchases. Participants use a variety of traditional and non-traditional SNAP-authorized stores for household dietary purchases, and these stores also represent food access points for all Americans. Importantly, consumers' food and beverage purchases from non-traditional store formats tend to be higher in saturated fats, added sugars, and sodium than purchases from traditional (e.g., grocery/supermarket) formats. Overconsumption of energy-dense, low-nutrient food and beverage products contributes to high obesity rates and adverse health outcomes that differ in severity between urban and rural U.S. locations and between high- and low-income populations. Little is known about the SNAP-authorized food store format landscape nationally, regionally, or by urban-rural status, as traditional formats are currently used as the gold standard in food access research. This research utilized publicly available U.S. databases to fill this large literature gap and to provide insight into modes of food access for vulnerable U.S. populations: (1) the SNAP Retailer Locator, which provides a list of all authorized food stores in the U.S., and (2) Rural-Urban Continuum Codes (RUCC), which categorize U.S. counties as urban (RUCC 1-3) or rural (RUCC 4-9). Frequencies were determined for the highest-occurring food store formats nationally and within two regionally diverse U.S. states, Virginia in the east and California in the west. Store format codes were assigned (e.g., grocery, drug, convenience, mass merchandiser, supercenter, dollar, club, or other). RUCC was applied to investigate state-level urban-rural differences in prevalent food store formats, and a chi-square test of independence was used to determine whether food store format distributions differed significantly (p < 0.05) by region or rurality. The resulting research sample of highly prevalent SNAP-authorized food stores included 41.25% of all SNAP stores in the U.S. (N=257,839) and was composed primarily of convenience formats (31.94%), followed by dollar (25.58%), drug (19.24%), traditional (10.87%), supercenter (6.85%), mass merchandiser (1.62%), non-food store or restaurant (1.81%), and club formats (1.09%). Results also indicated that the distribution of prevalent SNAP-authorized formats differed significantly by state. California had a lower proportion of traditional (9.96%) and a higher proportion of drug (28.92%) formats than Virginia (11.55% and 19.97%, respectively; p < 0.001). Virginia also had a higher proportion of dollar formats (26.11%) than California (10.64%) (p < 0.001). Significant differences were also observed for rurality variables (p < 0.001). Notably, rural Virginia had a significantly higher proportion of dollar formats (41.71%) than urban Virginia (21.78%) and rural California (21.21%). Non-traditional SNAP-authorized formats are highly prevalent and differ significantly in distribution by U.S. region and rurality. The largest proportional difference was observed for dollar formats, where the least nutritious consumer purchases are documented in the literature. Researchers and practitioners should investigate non-traditional food stores at the local level, using these findings and similar applied methodologies, to determine how access to various store formats affects obesity prevalence. For example, dollar stores may be prime targets for interventions to enhance nutritious consumer purchases in rural Virginia, while targeting drug formats may be more appropriate in California.
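A minimal sketch of the chi-square test of independence applied to store format distributions follows; the counts are hypothetical placeholders, not the study's frequencies, and the real analysis merged the SNAP Retailer Locator with RUCC county codes.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical counts of SNAP-authorized store formats by state (placeholders).
counts = pd.DataFrame(
    {"Virginia":   [1155, 2611, 1997, 3194, 685],
     "California": [996, 1064, 2892, 3500, 700]},
    index=["traditional", "dollar", "drug", "convenience", "supercenter"])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")

# The same test can be repeated on urban vs. rural counts within a state
# (RUCC 1-3 = urban, RUCC 4-9 = rural) to probe rurality differences.
```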

Keywords: food access, food store format, nutrition interventions, SNAP consumers

Procedia PDF Downloads 128
24534 Thermal Performance of Extensive Wetland Green Roofs in Winter in a Humid Subtropical Climate

Authors: Yi-Yu Huang, Chien-Kuo Wang, Sreerag Chota Veettil, Hang Zhang, Hu Yike

Abstract:

Regarding the pressing issue of reducing the energy consumption and carbon footprint of buildings, past research has focused mainly on analyzing the thermal performance of extensive terrestrial green roofs with sedum plants in summer. The disadvantages of this type of green roof, however, are relatively limited thermal performance, low adaptability to extreme weather, relatively high maintenance demands, and low added value as a healing landscape. In view of this, this research aims to develop extensive wetland green roofs with higher thermal performance, high adaptability to extreme weather, low maintenance demands, and high added value as a healing landscape, and to measure their thermal performance for buildings in winter. The factors considered include the type and mixing formula of the growth medium (lightweight soil, akadama, creek gravel, pure water) and the type of aquatic plants. The research adopts a four-stage field experiment conducted on the rooftop of a building in a humid subtropical climate. The results show that the emergent (roundleaf rotala), submerged (ribbon weed), and floating-leaved (water lily) wetland green roofs had similar thermal performance and were superior to the wetland green roof without plants, the traditional terrestrial green roof (without plants), and the pure-water green roof (without plants, nighttime only) in terms of the overall passive cooling (8.0 °C) and thermal insulation (4.5 °C) effects as well as the reduction in heat amplitude (77-85%) in winter in a humid subtropical climate. The thermal performance of the free-floating (water hyacinth) wetland green roof is inferior to that of the other three types of wetland green roofs, both in daytime and at nighttime.
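The sketch below shows one plausible way such indicators could be derived from hourly surface temperatures above and below a roof assembly; both the placeholder temperature series and the metric definitions are assumptions for illustration, not the authors' measured data or exact formulas.

```python
import numpy as np

# Placeholder diurnal temperature cycles (24 hourly values, °C): a reference
# roof surface and a dampened, delayed cycle under the wetland green roof.
hours = np.arange(24)
t_bare_roof = 18 + 7.0 * np.sin((hours - 9) * np.pi / 12)
t_wetland_roof = 18 + 1.4 * np.sin((hours - 11) * np.pi / 12)

# Assumed definitions: heat amplitude reduction from the ratio of daily
# temperature ranges, and passive cooling as the peak daytime difference.
amp_reduction = 100 * (1 - np.ptp(t_wetland_roof) / np.ptp(t_bare_roof))
peak_cooling = np.max(t_bare_roof - t_wetland_roof)
print(f"heat amplitude reduction: {amp_reduction:.0f}%")
print(f"peak cooling effect: {peak_cooling:.1f} °C")
```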

Keywords: thermal performance, extensive wetland green roof, aquatic plant, winter, humid subtropical climate

Procedia PDF Downloads 163
24533 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media

Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca

Abstract:

Image sharing in social networks has increased exponentially in the past years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for new tools to understand the content expressed in shared images, which will greatly benefit social media communication and enable broad and promising applications in education, advertisement, entertainment, and psychology. Following these trends, our work takes advantage of the relationship between text and personality, already demonstrated by multiple researchers, to show that a relationship also exists between images and personality. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags, so any relationship between text and personality can also be observed in those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users and determine the images most correlated with personality traits. The final aim is to train a weakly supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is as follows: we explore the images publicly shared by users alongside the texts or hashtags most strongly related to the personality traits described by the OCEAN model. These images are used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains that picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to the OCEAN traits. We then analyze the network activations to identify the pictures that maximally activate the neurons: the most characteristic visual features per personality trait thus emerge, since the filters of the convolutional layers are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of many people. High Agreeableness images are mostly pictures of flowers. Lastly, for the Neuroticism trait, the high score is maximally activated by pet animals such as cats or dogs. In summary, despite the huge intra-class and inter-class variability of the images associated with each OCEAN trait, we found consistent visual patterns among the images whose hashtags are most correlated to each trait.
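A minimal sketch of the weakly supervised setup described above is shown below: a convolutional backbone is fine-tuned to predict the five OCEAN traits from images whose hashtags serve as noisy labels. The architecture, loss, and placeholder batch are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class PersonalityNet(nn.Module):
    """CNN that maps an image to five OCEAN trait scores (illustrative)."""
    def __init__(self, n_traits=5):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V2")
        backbone.fc = nn.Identity()             # keep the 2048-d visual features
        self.backbone = backbone
        self.head = nn.Linear(2048, n_traits)   # one score per OCEAN trait

    def forward(self, x):
        return self.head(self.backbone(x))

model = PersonalityNet()
criterion = nn.BCEWithLogitsLoss()              # hashtag-derived weak labels in [0, 1]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)            # placeholder image batch
labels = torch.rand(8, 5)                       # placeholder weak trait labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```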

Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks

Procedia PDF Downloads 180
24532 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes hardware and software sections that perform the imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to the exact location of the target object. The data set for impedance imaging is obtained through repeated current injections and voltage measurements between all electrode pairs. After the necessary calculations to obtain the impedance, the information is transmitted to the computer. The data is then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired target upon user command.
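A minimal sketch of the post-reconstruction steps is given below: locating the target in the reconstructed image via a simple thresholded centroid, and converting its coordinates to joint angles for a planar two-link arm (the third DOF would be the base rotation). The link lengths, threshold, and pixel scaling are hypothetical, and the actual system performs these steps in MATLAB/EIDORS rather than Python.

```python
import numpy as np

def target_centroid(conductivity_image, threshold=0.5):
    # centroid of pixels above a fraction of the image maximum
    mask = conductivity_image > threshold * conductivity_image.max()
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                 # centroid in pixel coordinates

def two_link_ik(x, y, l1=0.15, l2=0.10):
    # inverse kinematics of a planar 2-link arm (elbow-down solution)
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

img = np.random.rand(64, 64)                    # placeholder reconstructed image
px, py = target_centroid(img)
x, y = px * 0.004, py * 0.004                   # assumed pixel -> metre scaling
print(two_link_ik(x, y))
```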

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 262
24531 Efficacy of DAPG-Producing Fluorescent Pseudomonas for Enhancing Nutrient Use Efficiency, Bio-Control of Soil-Borne Diseases and Yield of Groundnut

Authors: Basavaraj Yenagi, P. Nagaraju, C. R. Patil

Abstract:

Groundnut (Arachis hypogaea L.) is called the 'King of oilseeds' and is one of the most important food and cash crops of the Indian subcontinent. Yield and oil quality are negatively correlated with poor or imbalanced nutrition and constant exposure to biotic and abiotic stress factors. A variety of diseases affect the groundnut plant; most of them are caused by fungi and lead to severe yield loss. Imbalanced nutrition also raises concerns of environmental deterioration, including declining soil fertility. Among microbial antagonists, Pseudomonas is a common member of the plant growth-promoting rhizobacterial microflora present in the groundnut rhizosphere. These bacteria are known to benefit groundnut due to their high metabolic activity, leading to the production of enzymes, exopolysaccharides, secondary metabolites, and antibiotics. Their antagonistic ability lies in the production of antibiotic metabolites such as 2,4-diacetylphloroglucinol (DAPG). DAPG can inhibit the growth of the fungal pathogens causing collar rot and stem rot and also increase the availability of plant nutrients through enhanced solubilization and uptake. Hence, the present study was conducted for three consecutive years (2014 to 2016) on a vertisol during the rainy season to assess the efficacy of DAPG-producing fluorescent pseudomonads for enhancing nutrient use efficiency, bio-control of soil-borne diseases, and yield of groundnut at the University of Agricultural Sciences, Dharwad farm. The experiment was laid out in an RCBD with three replications and seven treatments. The mean of three years of data revealed that DAPG-producing fluorescent pseudomonads enhanced groundnut yield, nitrogen and phosphorus uptake, and nutrient use efficiency, and were also effective in the bio-control of collar rot and stem rot, leading to increased pod yield. The highest dry pod yield was obtained with DAPG 2 (3535 kg ha-1), closely followed by DAPG 4 (3492 kg ha-1), FP 98 (3443 kg ha-1), DAPG 1 (3414 kg ha-1), FP 86 (3361 kg ha-1) and Trichoderma spp. (3380 kg ha-1), compared with the control (3173 kg ha-1). A similar trend was obtained for the other growth and yield parameters. The increase in N uptake over the control ranged from 8.21 percent with FP 86 to 17.91 percent with DAPG 2, and the increase in P uptake ranged from 5.56 percent with FP 86 to 16.67 percent with DAPG 2. In the first year, there was no incidence of collar rot. In the second year, the control plot recorded 2.51 percent incidence, whereas incidence ranged from 0.82 percent to 1.43 percent in the DAPG-producing fluorescent pseudomonad treatments; a similar trend, with lower incidence, was observed in the third year. Stem rot incidence was recorded in all three years: on average, the control plot recorded 2.65 percent incidence, while incidence ranged from 0.71 percent to 1.23 percent in the DAPG-producing fluorescent pseudomonad treatments. The increase in net monetary benefit ranged from Rs. 5975 ha-1 to Rs. 11407 ha-1 across treatments. Hence, as a low-cost technology, seed treatment with DAPG-producing fluorescent pseudomonads has a beneficial effect on groundnut, enhancing yield and nutrient use efficiency and providing bio-control of soil-borne diseases.
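A minimal sketch of analysing a randomized complete block design (RCBD) with 7 treatments and 3 replications, as described above, is shown below using a two-way ANOVA with treatment and block effects; the pod yields are random placeholders, not the reported field data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder RCBD data set: 7 treatments x 3 blocks, simulated pod yields.
rng = np.random.default_rng(0)
treatments = ["Control", "DAPG 1", "DAPG 2", "DAPG 4",
              "FP 86", "FP 98", "Trichoderma"]
df = pd.DataFrame([(t, b, 3200 + rng.normal(0, 150))
                   for t in treatments for b in range(1, 4)],
                  columns=["treatment", "block", "pod_yield"])

# RCBD ANOVA: treatment effect adjusted for the block (replication) effect.
model = ols("pod_yield ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```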

Keywords: groundnut, DAPG, fluorescent pseudomonas, nutrient use efficiency, collar rot, stem rot

Procedia PDF Downloads 164
24530 Introduction of Robust Multivariate Process Capability Indices

Authors: Behrooz Khalilloo, Hamid Shahriari, Emad Roghanian

Abstract:

Process capability indices (PCIs) are important concepts in statistical quality control and measure the capability of processes, that is, how well processes meet given specifications. An important issue in statistical quality control is parameter estimation. Under the assumption of multivariate normality, the distribution parameters, the mean vector and the variance-covariance matrix, must be estimated when they are unknown. Classical estimation methods such as the method of moments (MME) and maximum likelihood estimation (MLE) estimate the population parameters well when the data are not contaminated. When outliers exist in the data, however, MME and MLE yield weak estimates of the population parameters, so estimators that remain reliable in the presence of outliers are needed. In this work, robust M-estimators are used to estimate these parameters, and based on the robust parameter estimates, robust process capability indices are introduced. The performance of these robust estimators in the presence of outliers and their effect on process capability indices are evaluated using real and simulated multivariate data. The results indicate that the proposed robust capability indices perform much better than the existing process capability indices.
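The sketch below illustrates the core idea on contaminated data: a classical covariance estimate is compared with a robust one, and a simple per-variable capability ratio is computed from each. The MCD estimator is used here as a stand-in for the M-estimators of the paper, and the toy index (USL - LSL) / (6σ) stands in for the multivariate indices it proposes.

```python
import numpy as np
from sklearn.covariance import MinCovDet

# Simulated bivariate process data with 5% injected outliers.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=200)
X[:10] += 8.0                                   # contamination

classical_cov = np.cov(X, rowvar=False)         # MLE-type estimate, swayed by outliers
robust_cov = MinCovDet(random_state=0).fit(X).covariance_

# Toy per-variable capability ratio from each covariance estimate.
LSL, USL = np.array([-4.0, -4.0]), np.array([4.0, 4.0])
cp_classical = (USL - LSL) / (6 * np.sqrt(np.diag(classical_cov)))
cp_robust = (USL - LSL) / (6 * np.sqrt(np.diag(robust_cov)))
print(cp_classical, cp_robust)                  # robust version ignores the outliers
```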

Keywords: multivariate process capability indices, robust M-estimator, outlier, multivariate quality control, statistical quality control

Procedia PDF Downloads 272