Search results for: the soil variables

680 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in Alberta's residential sector was 148.8 GJ per household in 2012, 39% higher than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology was based on the development of data-intensive models that estimate the market penetration of appliances in the residential sector over a time period. The developed models were a function of a number of macroeconomic and technical parameters. The mathematical equations were developed from twenty-two years of historical data (1990-2011), and the models were analyzed through a series of statistical tests. The market shares of high efficiency appliances were estimated for the period 2015 to 2050 based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050: freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increases in the market penetration rates of clothes washers and clothes dryers are nearly parallel: the stocks of clothes washers and clothes dryers per household are expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
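
The abstract does not reproduce the model equations, so as a rough illustration only: a minimal sketch of the kind of life-cycle-cost logit market-share split that the variables named above (capital and operating costs, discount rate, lifetime) typically feed into; every number below is invented, not the paper's calibrated values.

```python
import math

def annualized_cost(capital, operating, discount_rate, lifetime_years):
    """Annualize capital cost with a capital recovery factor, then add yearly operating cost."""
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / (
        (1 + discount_rate) ** lifetime_years - 1
    )
    return capital * crf + operating

# Illustrative numbers only: a standard vs. a high-efficiency refrigerator.
standard = annualized_cost(capital=900.0, operating=65.0, discount_rate=0.08, lifetime_years=18)
efficient = annualized_cost(capital=1150.0, operating=40.0, discount_rate=0.08, lifetime_years=18)

# Logit split on annualized life-cycle cost; `beta` sets the price sensitivity.
beta = 0.05
share_eff = math.exp(-beta * efficient) / (math.exp(-beta * efficient) + math.exp(-beta * standard))
print(f"High-efficiency market share: {share_eff:.1%}")
```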

Keywords: appliance efficiency improvement, Energy Star, market penetration, residential sector

Procedia PDF Downloads 288
679 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information, and its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques for improving decision-making; leveraging some of the most complex techniques in data science, it is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, it has limitations for some of the specific visualizations geotechnical engineers require. This paper studies the use of Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows additional data and visualizations to be incorporated based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need for multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients collaboratively further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
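
For readers unfamiliar with the Python-in-Power-BI mechanism described above, a minimal sketch of a stereonet visual: Power BI injects the fields selected for a Python visual into the script as a pandas DataFrame named `dataset`. The column names, the fallback data, and the use of the mplstereonet package are assumptions for illustration, not the authors' implementation.

```python
# Inside a Power BI Python visual, Power BI provides the selected fields as a
# pandas DataFrame named `dataset`. The fallback below only makes the sketch
# runnable standalone; the 'strike'/'dip' column names are assumptions.
import matplotlib.pyplot as plt
import mplstereonet  # registers the 'stereonet' matplotlib projection

try:
    dataset  # provided by Power BI at render time
except NameError:
    import pandas as pd
    dataset = pd.DataFrame({"strike": [30, 45, 60, 220], "dip": [40, 50, 35, 60]})

fig, ax = plt.subplots(subplot_kw={"projection": "stereonet"})
ax.plane(dataset["strike"], dataset["dip"], "g-", linewidth=1)   # great-circle traces
ax.pole(dataset["strike"], dataset["dip"], "k.", markersize=5)   # poles to planes
ax.grid(True)
plt.show()
```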

Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 92
678 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)

Authors: Vinay Kumar Vanjakula, Frank Adam

Abstract:

The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to the abundance of higher wind speeds far away from shore, the construction of offshore wind turbines began in recent decades. However, the installation of foundation-based (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been expanded, building on experience from the oil and gas industry. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transporting such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow it to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flows, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off local seabed sediment, resulting in scour (erosion). These effects are a threat to the structure's stability. In recent decades, research and knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very limited research has addressed scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under waves and in calm water. Anchor lowering involves the investigation of anchor movements at certain water depths under waves and currents; the anchor motions of drift, heave, and pitch are of special focus. A further part of the study involves anchor scour, where the anchor is installed in the seabed and the flow of the underwater current around it induces vortices, mainly at the front and corners, that develop soil erosion. Scouring on a submerged gravity anchor is an interesting research question since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.
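
The paper's CFD scour models are not given here; as a first-order illustration of how scour onset is commonly screened, a sketch of the classical Shields criterion for sediment motion with purely illustrative parameter values (a standard textbook check, not the authors' method).

```python
# Rough screening check for scour onset around a seabed structure using the
# classical Shields criterion. All parameter values are illustrative.
RHO_W = 1025.0   # seawater density, kg/m^3
RHO_S = 2650.0   # quartz sand grain density, kg/m^3
G = 9.81         # gravity, m/s^2
C_D = 0.0025     # depth-averaged drag coefficient (typical assumed value)

def shields_parameter(current_speed, grain_diameter):
    """Dimensionless bed shear stress (Shields parameter) for a steady current."""
    tau_bed = RHO_W * C_D * current_speed ** 2            # bed shear stress, Pa
    return tau_bed / ((RHO_S - RHO_W) * G * grain_diameter)

theta = shields_parameter(current_speed=1.2, grain_diameter=0.3e-3)  # 0.3 mm sand
THETA_CRIT = 0.047  # commonly quoted critical value for fully turbulent flow
verdict = "sediment motion / scour likely" if theta > THETA_CRIT else "stable bed"
print(f"Shields parameter {theta:.3f} -> {verdict}")
```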

Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour

Procedia PDF Downloads 170
677 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques

Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola

Abstract:

Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation, due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help quantify the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by exploiting the spectral response contrast of intertidal classes in visible, near- and mid-infrared bands. Tidal areas are highly reflective in visible bands, mainly because of the presence of fine sand deposits. However, acquiring cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high revisit frequency over intertidal zones. Multi-polarization SAR parameters have been used successfully in mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters and can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter; for instance, increases in backscatter intensity associated with low coherence are an indicator of abrupt surface changes. In this research, we present primary results from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq, on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low-tide conditions were selected and used to build an interferometric stack of data. The observed displacements along the line of sight, generated using HH and VV polarization, are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
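
As background to the coherence/backscatter discussion above, a minimal numpy sketch of the standard boxcar coherence estimator for two co-registered complex (SLC) SAR images; the random toy data merely stand in for Radarsat-2 acquisitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, win=5):
    """Estimate interferometric coherence |gamma| between two co-registered
    complex SAR images with a win x win boxcar average."""
    def smooth(a):  # complex-safe moving average
        return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)
    num = smooth(slc1 * np.conj(slc2))
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                  uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Toy demo: two correlated random scenes standing in for repeat-pass images.
rng = np.random.default_rng(0)
s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
s2 = s1 + 0.3 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
gamma = coherence(s1, s2)
print(f"mean coherence: {gamma.mean():.2f}")  # low values flag surface change
```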

Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2

Procedia PDF Downloads 137
676 Comparative Analysis of Costs and Well Drilling Techniques for Water, Geothermal Energy, Oil and Gas Production

Authors: Thales Maluf, Nazem Nascimento

Abstract:

The development of society relies heavily on the total amount of energy obtained and its consumption. Over the years, there has been an advancement in energy attainment, which is directly related to some natural resources and developing systems. Some of these resources should be highlighted for their remarkable presence in the world's energy mix, such as water, petroleum, and gas, while others deserve attention for representing an alternative that diversifies the energy mix, like geothermal sources. Because all these resources are extracted from the underground, drilling wells is a mandatory activity for their exploration, involving a prior geological study and adequate preparation; it also involves a cleaning process and an extraction process that can be executed by different procedures. For that reason, this research aims to enhance exploration processes through a comparative analysis of drilling costs and the techniques used to produce wells. The analysis itself is a bibliographical review based on books, scientific papers, and academic works, and mainly explores drilling methods and technologies, equipment used, well measurements, extraction methods, and production costs. Besides the techniques and costs of the drilling processes, some properties and general characteristics of these sources are also compared. Preliminary studies show that there are major differences in the exploration processes, mostly because these resources are naturally distinct. Water wells, for instance, are hundreds of meters deep because water is stored close to the surface, while oil, gas, and geothermal production wells can reach thousands of meters, which makes them more expensive to drill. The drilling methods present some general similarities, especially regarding the main mechanism of perforation, but since water is stored closer to the surface than the other resources, there is a wider variety of methods: water wells can be drilled by rotary mechanisms, percussion mechanisms, rotary-percussion mechanisms, and some other simpler methods. Oil and gas production wells, on the other hand, require rotary or rotary-percussion drilling with a proper structure called a drill rig and resistant materials for the drill bits and other components, mostly because these resources are stored in sedimentary basins that can be located thousands of meters underground. Geothermal production wells also require rotary or rotary-percussion drilling, as well as the existence of an injection well and an extraction well. The exploration efficiency also depends on the permeability of the soil, which is why Enhanced Geothermal Systems (EGS) have been developed. Throughout this review, it can be verified that the analysis of the extraction processes of energy resources is essential, since these resources are responsible for society's development. Furthermore, the comparative analysis of costs and well drilling techniques for water, geothermal energy, oil, and gas production, which is the main goal of this research, can enable the growth of the energy generation field through the emergence of ideas that improve the efficiency of energy generation processes.

Keywords: drilling, water, oil, gas, geothermal energy

Procedia PDF Downloads 145
675 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the forces resisting movement are greater than those driving the movement, with a factor of safety (the ratio of the resisting to the driving forces) greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards; assessment of the level of slope stability hazard; development of a sophisticated and practical hazard analysis method; linkage of the failure type of specific landslide conditions to the appropriate solution; and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through geographical information systems (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
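
To make the limit-equilibrium idea concrete, a sketch of its simplest instance, the infinite-slope factor of safety, with purely illustrative parameters (not the study's data): FS falls below 1.00 as pore pressure erodes the resisting forces.

```python
import math

def infinite_slope_fs(c_eff, phi_eff_deg, gamma, depth, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of safety of an infinite slope (limit equilibrium).

    c_eff        effective cohesion, kPa
    phi_eff_deg  effective friction angle, degrees
    gamma        soil unit weight, kN/m^3
    depth        depth of the slip surface, m
    beta_deg     slope angle, degrees
    m            fraction of the slip depth that is saturated (0..1)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    u = gamma_w * m * depth * math.cos(beta) ** 2          # pore pressure, kPa
    resisting = c_eff + (gamma * depth * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative check: the slope destabilizes as it saturates.
for m in (0.0, 0.5, 1.0):
    fs = infinite_slope_fs(c_eff=5.0, phi_eff_deg=30.0, gamma=19.0,
                           depth=3.0, beta_deg=35.0, m=m)
    print(f"saturation m={m:.1f}: FS={fs:.2f}")
```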

Keywords: slope stability, finite element analysis, hazard analysis, landslide hazards

Procedia PDF Downloads 101
674 The Impact of Task Type and Group Size on Dialogue Argumentation between Students

Authors: Nadia Soledad Peralta

Abstract:

Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions, so argumentation is a target for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and type of task (reading frequency tables, causal explanation of physical phenomena, decisions regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured the heterogeneity of points of view between participants: group 1 consisted of 56 participants working in dyads (28 dyads), and group 2 of 33 participants working in triads (11 triads). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentative resolution are described: authentically dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads; that is, although people express their own arguments, there is no authentic argumentative interaction, and there are consequently few reciprocal evaluations and counter-arguments. By contrast, authentically dialogical argument prevails in triads, showing constant feedback between participants' points of view. In general, the type of task generates specific types of argumentative interaction: authentically dialogical arguments predominate in the logical tasks, whereas individualistic or pseudo-dialogical resolutions are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. Argumentation not only allows metacognitive reflection on one's own point of view but also allows people to benefit from exchanging points of view in interactive contexts.

Keywords: sociocognitive interaction, argumentation, university students, group size

Procedia PDF Downloads 84
673 Investigation of Xanthomonas euvesicatoria on Seed Germination and Seed to Seedling Transmission in Tomato

Authors: H. Mayton, X. Yan, A. G. Taylor

Abstract:

Infested tomato seeds were used to investigate the influence of Xanthomonas euvesicatoria on germination and seed-to-seedling transmission in controlled environment and greenhouse assays, in an effort to develop effective seed treatments and characterize seed-borne transmission of bacterial leaf spot of tomato. Bacterial leaf spot of tomato, caused by four distinct Xanthomonas species, X. euvesicatoria, X. gardneri, X. perforans, and X. vesicatoria, is a serious disease worldwide. In the United States, disease prevention is expensive for commercial growers in warm, humid regions of the country, and crop losses can be devastating. In this study, four different infested tomato seed lots were extracted from tomato fruits infected with bacterial leaf spot from a field in New York State in 2017 that had been inoculated with X. euvesicatoria. In addition, vacuum infiltration at 61 kilopascals for 1, 5, 10, and 15 minutes and seed soaking for 5, 10, 15, and 30 minutes with different bacterial concentrations were used to artificially infest seed in the laboratory. For controlled environment assays, infested tomato seeds from the field and laboratory were placed on moistened blue blotter in square plastic boxes (10 cm x 10 cm) and incubated at 20/30 ˚C with an 8/16 hour light cycle, respectively. Infested tomato seeds from the field and laboratory were also planted in small plastic trays in soil (peat-lite medium) and placed in the greenhouse with 24/18 ˚C day and night temperatures, respectively, and a 14-hour photoperiod. Seed germination was assessed after eight days in the laboratory and 14 days in the greenhouse. Polymerase chain reaction (PCR) using the hrpB7 primers (RST65 [5’-GTCGTCGTTACGGCAAGGTGGTG-3’] and RST69 [5’-TCGCCCAGCGTCATCAGGCCATC-3’]) was performed to confirm the presence or absence of the bacterial pathogen in seed lots collected from the field and in germinating seedlings in all experiments. For infested seed lots from the field, germination was lowest (84%) in the seed lot with the highest level of bacterial infestation (55%) and ranged from 84-98%. No adverse effect on germination was observed for artificially infested seeds at any bacterial concentration or method of infiltration when compared to a non-infested control; germination in laboratory assays for artificially infested seeds ranged from 82-100%. In the controlled environment assays, 2.5% of seedlings were PCR positive for the pathogen, and in the greenhouse assays, no infected seedlings were detected. From these experiments, X. euvesicatoria does not appear to adversely influence germination. The lower rate of germination from field-collected seed may be due to contamination with multiple pathogens and saprophytic organisms, as no effect on germination was observed from artificial bacterial seed infestation in the laboratory. No evidence of systemic movement from seed to seedling was observed in the greenhouse assays; however, in the controlled environment assays, some seedlings were PCR positive. Additional experiments are underway with green fluorescent protein-expressing isolates to further characterize seed-to-seedling transmission of the bacterial leaf spot pathogen in tomato.

Keywords: bacterial leaf spot, seed germination, tomato, Xanthomonas euvesicatoria

Procedia PDF Downloads 135
672 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession, and teaching science is as challenging as learning the subject, if not more so. Take the teaching of chemistry, for instance: from the introductory concepts of subatomic particles to atoms of elements and their symbols, and on to the chemical equation and so forth, it is a challenge on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of ‘working memory’, with the visuo-spatial sketchpad, the central executive, and the phonological loop that transforms short-term memory into long-term memory, supports the psychological theory of learning styles, i.e., Visual-Auditory-Kinesthetic. A closer examination of David Kolb's learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality; others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization, thinking about, analyzing, or systematically planning rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C + O = M (C: instructional condition; O: instructional outcome; M: instructional method). The desired outcome and conditions are independent variables, whereas the instructional method is dependent and hence can be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage learners, build confidence and hope, and go a long way toward replacing the anxiety and hopelessness that students experience while learning science, with a human touch in it. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers. The responses received have demonstrated the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 352
671 Achieving Household Electricity Saving Potential Through Behavioral Change

Authors: Lusi Susanti, Prima Fithri

Abstract:

The rapid growth of Indonesia's population is directly proportional to the energy needs of the country, but not all of the Indonesian population has access to electricity. Indonesia's electrification ratio is still around 80.1%, which means that approximately 19.9% of households in Indonesia do not yet receive electrical energy. Household electricity consumption in Indonesia is generally still dominated by the urban public. In the city of Padang, West Sumatera, Indonesia, about 94.10% of households are customers of the state power utility (PLN). The most important aspect of the issue is the efficient use of energy by people: user behavior in utilizing electricity becomes significant, and any remedial solution must address users' habits if energy savings are to be sustained. This study attempts to identify the user behaviors and lifestyles that affect household electricity consumption and to evaluate the potential for energy saving. The behavioral component is frequently underestimated or ignored in analyses of household electrical energy end use, partly because of its complexity: it is influenced by socio-demographic factors, culture, attitudes, aesthetic norms and comfort, as well as social and economic variables. An intensive questionnaire survey, in-depth interviews, and statistical analysis were carried out to collect scientific evidence for behavior-based instruments to reduce electricity consumption in the household sector. The questionnaire was developed to include five factors assumed to affect the electricity consumption pattern in the household sector: attitude, energy price, household income, knowledge, and other determinants. The survey was carried out in Padang, West Sumatra Province, Indonesia. About 210 questionnaires were distributed proportionally to households in 11 districts of Padang, with stratified sampling used to select respondents. The results show that household size, income, payment method, and size of house are factors affecting electricity-saving behavior in the residential sector. Household expenses on electricity are strongly influenced by gender, type of job, level of education, size of house, income, payment method, and level of installed power. These results provide scientific evidence for stakeholders on the potential for controlling electricity consumption and for the design of energy policy by government in the residential sector.

Keywords: electricity, energy saving, household, behavior, policy

Procedia PDF Downloads 440
670 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different uncertainty representation approaches result in different outputs; some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges in uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem: in the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that is aleatory but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive; that is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
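
A minimal sketch of the double-loop sampling strategy described above, applied to a toy model (not the NASA challenge code): epistemic interval parameters are sampled in an outer loop, aleatory sampling runs inside, and the spread of an output percentile gives the (likely underestimated) bounds.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x1, x2):
    """Toy stand-in for the challenge's black-box computational model."""
    return x1 ** 2 + np.sin(x2)

# Type (iii): x1 ~ Normal(mu, sigma) with epistemic intervals on its
# distribution parameters (a distributional p-box).
MU_BOUNDS, SIGMA_BOUNDS = (0.4, 0.6), (0.05, 0.15)
# Type (ii): x2 fixed but only known to lie in an interval.
X2_BOUNDS = (0.0, 1.0)

N_OUTER, N_INNER, PCTL = 200, 1000, 95
pctl_values = []
for _ in range(N_OUTER):                      # outer loop: epistemic realizations
    mu = rng.uniform(*MU_BOUNDS)
    sigma = rng.uniform(*SIGMA_BOUNDS)
    x2 = rng.uniform(*X2_BOUNDS)
    x1 = rng.normal(mu, sigma, N_INNER)       # inner loop: aleatory sampling
    pctl_values.append(np.percentile(model(x1, x2), PCTL))

print(f"{PCTL}th-percentile bounds: [{min(pctl_values):.3f}, {max(pctl_values):.3f}]")
# With finite outer samples these bounds tend to be underestimated, which is
# the motivation the abstract gives for percentile-based optimization (PBO).
```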

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 241
669 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving quality-control effort can be exploited through the data-based prediction of product quality and states. However, the use of machine learning applications in series production is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance; competitive leaders claim to have mastered their processes, and as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science, and, as in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
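
A minimal scikit-learn sketch of the business-understanding question on synthetic data: predict the leakage volume flow itself (regression) or only the pass/fail inspection decision (classification). The data, threshold, and models are invented stand-ins for the Bosch Rexroth use case.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cross-process production data: features along the
# value chain, target = leakage volume flow. The narrow band mimics the low
# variance of a mastered process.
X, y_raw = make_regression(n_samples=2000, n_features=10, noise=20.0, random_state=0)
leakage = 5.0 + 0.01 * y_raw                 # rescale to a narrow band around 5.0
passed = (leakage <= 5.5).astype(int)        # invented inspection threshold

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(X, leakage, passed, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, p_tr)

print(f"regression R^2:          {r2_score(y_te, reg.predict(X_te)):.2f}")
print(f"classification accuracy: {accuracy_score(p_te, clf.predict(X_te)):.2f}")
# Choosing between these two framings is exactly the business-understanding
# question the abstract raises; the pass/fail framing needs less precision.
```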

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 145
668 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is a function called the “objective function” that is to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. The problem becomes more complex when there is more than one objective for the design. As an example of a Multi-Objective Optimization Problem (MOP), consider a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases; at this point, the designer must decide which point on the curve to choose. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, EAs belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point by using operators such as selection, combination, cross-over, and/or mutation. These operators are applied to the old solutions, the “parents”, so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbo-machinery, and automobiles. Coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbo-machinery design.
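
A compact, self-contained sketch of the evolutionary loop described above (selection, cross-over, mutation) on Schaffer's classical bi-objective test function; in a CFD-coupled MOEA each objective evaluation would be a flow solve rather than an analytic expression, and a production code would use a full NSGA-style ranking.

```python
import numpy as np

rng = np.random.default_rng(1)

def objectives(x):
    """Schaffer's bi-objective test problem, standing in for two CFD outputs."""
    return np.stack([x ** 2, (x - 2.0) ** 2], axis=1)

def nondominated(f):
    """Boolean mask of Pareto-optimal rows (both objectives minimized)."""
    mask = np.ones(len(f), dtype=bool)
    for i in range(len(f)):
        dominates_i = np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

pop = rng.uniform(-10.0, 10.0, size=60)             # random start in the search space
for _ in range(100):
    f = objectives(pop)
    parents = pop[nondominated(f)][:30]             # selection: keep Pareto set (capped)
    mates = rng.choice(parents, size=(60, 2))       # pick pairs of parents
    children = mates.mean(axis=1)                   # combination / cross-over
    children += rng.normal(0.0, 0.3, size=60)       # mutation
    pop = np.concatenate([parents, children])

front = objectives(pop)
nd = nondominated(front)
print(f"{nd.sum()} non-dominated designs; f1 spans "
      f"{front[nd][:, 0].min():.2f} to {front[nd][:, 0].max():.2f}")
```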

Keywords: mathematical optimization, multi-objective evolutionary algorithms (MOEA), computational fluid dynamics (CFD), aerodynamic shape optimization

Procedia PDF Downloads 257
667 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions

Authors: Peter Carter

Abstract:

Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not capture future commitment, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. In their application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate change policy; in particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, which has the objective of “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”
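
As one illustration of why atmospheric CO2 equivalent carries more information than temperature alone, a sketch that inverts the widely used simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) (Myhre et al., 1998) to express a total GHG forcing as a CO2-equivalent concentration; the forcing components below are round illustrative numbers, not the paper's values.

```python
import math

F_COEFF = 5.35   # W m^-2, simplified CO2 forcing coefficient (Myhre et al., 1998)
C0 = 278.0       # pre-industrial CO2 concentration, ppm

def co2_equivalent(total_forcing_w_m2):
    """Invert dF = 5.35 ln(C/C0) to express total GHG forcing as CO2-eq, ppm."""
    return C0 * math.exp(total_forcing_w_m2 / F_COEFF)

# Illustrative round numbers for forcing components (W m^-2), not an assessment.
forcing = {"CO2": 2.2, "CH4": 0.55, "N2O": 0.2, "halocarbons": 0.4}
total = sum(forcing.values())
print(f"total GHG forcing {total:.2f} W/m^2 -> ~{co2_equivalent(total):.0f} ppm CO2-eq")
```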

Keywords: climate change, climate change indicators, climate change trends, climate system change interactions

Procedia PDF Downloads 105
666 Locus of Control and Self-Esteem as Predictors of Maternal and Child Healthcare Services Utilization in Nigeria

Authors: Josephine Aikpitanyi, Friday Okonofua, Lorretta Ntoimo, Sandy Tubeuf

Abstract:

Every day, 800 women die from conditions related to pregnancy and childbirth, resulting in an estimated 300,000 maternal deaths worldwide per year. Over 99 percent of all maternal deaths occur in developing countries, with more than half of them occurring in sub-Saharan Africa. Nigeria, the most populous nation in sub-Saharan Africa, bears a significant burden of worsening maternal and child health outcomes, with a maternal mortality rate of 917 per 100,000 live births and a child mortality rate of 117 per 1,000 live births. While several studies have documented that financial barriers disproportionately discourage poor women from seeking needed maternal and child healthcare, other studies have indicated otherwise: evidence shows that there are instances where health facilities with skilled healthcare providers exist and yet maternal and child health outcomes remain abysmally low, indicating the presence of non-cognitive and behavioural factors that may affect the utilization of healthcare services. This study investigated the influence of locus of control and self-esteem on the utilization of maternal and child healthcare services in Nigeria. Specifically, it explored the differences in utilization of antenatal care, skilled birth care, postnatal care, and child vaccination between women with an internal and an external locus of control and between women with high and low self-esteem. We collected information on the non-cognitive traits of 1,411 randomly selected women, along with information on the utilization of the various indicators of maternal and child healthcare. Estimating logistic regression models for various components of healthcare services utilization, we found that a woman's internal locus of control was a significant predictor of the utilization of antenatal care, skilled birth care, and completion of child vaccination. We also found that having high self-esteem was a significant predictor of the utilization of antenatal care, postnatal care, and completion of child vaccination after adjusting for other control variables. By improving our understanding of non-cognitive traits as possible barriers to maternal and child healthcare utilization, our findings offer important insights for enhancing participant engagement in intervention programs that are initiated to improve maternal and child health outcomes in low- and middle-income countries.
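
A minimal sketch of the kind of logistic model estimated in the study, run end to end on invented data (the simulated coefficients and the control variable are placeholders, not the survey results).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented example data standing in for the 1,411 surveyed women.
rng = np.random.default_rng(3)
n = 1411
df = pd.DataFrame({
    "internal_locus": rng.integers(0, 2, n),    # 1 = internal locus of control
    "high_self_esteem": rng.integers(0, 2, n),  # 1 = high self-esteem
    "education_years": rng.integers(0, 16, n),  # one example control variable
})
# Simulated outcome so the sketch runs; these effects are not the study's estimates.
logit = -1.0 + 0.8 * df.internal_locus + 0.5 * df.high_self_esteem + 0.05 * df.education_years
df["skilled_birth_care"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["internal_locus", "high_self_esteem", "education_years"]])
model = sm.Logit(df["skilled_birth_care"], X).fit(disp=0)
print(model.summary())  # coefficients and p-values; exp(coef) gives odds ratios
```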

Keywords: behavioural economics, health-seeking behaviour, locus of control, self-esteem, maternal and child healthcare, non-cognitive traits, healthcare utilization

Procedia PDF Downloads 167
665 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets

Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah

Abstract:

Introduction: The major challenge with the design of oral dosage forms lies in their poor bioavailability; the most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two different carriers, PEG 6000 and Pluronic F127, and two different techniques, solvent evaporation and fusion. A 2³ full factorial design was adopted to investigate the influence of the formulation variables on the properties of the prepared nanodispersions. The best formula among the nanodispersed powders was formulated into tablets by direct compression. Differential Scanning Calorimetry (DSC) and Fourier Transform Infrared (FTIR) analyses were conducted to characterize the thermal behavior and surface structure, respectively, and the zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and any of the excipients used. The resulting values of the pre-compression parameters showed that the prepared solid nanodispersion powder blends had poor to excellent flow properties; the values of the other evaluated pre-compression parameters were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA on the percentage of drug released after 20 and 60 minutes showed significant differences between the GMP-nanodispersed tablet formulae (F1-F8) (P<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.
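
A small sketch generating the 2³ full factorial design matrix behind the eight formulae (F1-F8); the factor names and levels are inferred from the abstract and partly assumed (in particular, the drug:carrier ratios shown are illustrative).

```python
from itertools import product

import pandas as pd

# Assumed factors and levels for a 2^3 full factorial design: two carriers,
# two carrier ratios (illustrative values), and two preparation techniques.
factors = {
    "carrier": ["PEG 6000", "Pluronic F127"],
    "drug_carrier_ratio": ["1:1", "1:2"],
    "technique": ["solvent evaporation", "fusion"],
}

design = pd.DataFrame(list(product(*factors.values())), columns=list(factors))
design.index = [f"F{i}" for i in range(1, len(design) + 1)]
print(design)  # eight runs F1..F8, matching the eight formulae evaluated
```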

Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs

Procedia PDF Downloads 488
664 Association between Cholesterol Levels and Atopy among Adolescents with and without Sufficient Amount of Physical Activity

Authors: Keith T. S. Tung, H. W. Tsang, Rosa S. Wong, Frederick K. Ho, Patrick Ip

Abstract:

Objectives: Atopic diseases are increasingly prevalent among children and adolescents, both locally and internationally. One possible contributing factor is hypercholesterolemia, which leads to cholesterol accumulation in macrophages and other immune cells and eventually promotes inflammatory responses, including augmentation of toll-like receptor (TLR) signaling. Meanwhile, physical activity is well known for its beneficial effects against hypercholesterolemia and the incidence of atopic diseases. This study therefore explored whether atopic diseases were associated with increased cholesterol levels and whether physical activity habits influenced this association. Methods: This is a sub-study of a longitudinal cohort study that recruited a group of children at five years of age in Kindergarten 3 (K3) to investigate the long-term impact of family socioeconomic status on child development. In 2018/19, the adolescents (average age: 13 years) were asked to report their physical activity habits and history of any atopic diseases. During a health assessment, peripheral blood samples were collected from the adolescents to study their lipid profile [total cholesterol, high-density lipoprotein (HDL) cholesterol, and low-density lipoprotein (LDL) cholesterol]. Regression analyses were performed to test the relationships between the variables of interest. Results: Among the 315 adolescents, 99 (31.4%) reported having allergic rhinitis, 45 (14.3%) eczema, 17 (5.4%) food allergy, and 12 (3.8%) asthma. Regression analyses showed that adolescents with a history of any type of atopic disease had significantly higher total cholesterol (B=13.3, p < 0.01) and LDL cholesterol (B=7.9, p < 0.05) levels. Further subgroup analyses examined the effect of physical activity level on the association between atopic diseases and cholesterol levels. We found stronger associations among those who did not meet the World Health Organization recommendation of at least 60 minutes of moderate-to-vigorous activity each day (total cholesterol: B=15.5, p < 0.01; LDL cholesterol: B=10.4, p < 0.05); for those who met this recommendation, the associations between atopic diseases and cholesterol levels became insignificant. Conclusion: Our results support the current research evidence on the relationship between elevated cholesterol levels and atopic diseases. More importantly, they provide preliminary support for a protective effect of regular exercise against the elevated cholesterol levels associated with atopic diseases. The findings highlight the importance of a healthy lifestyle for keeping cholesterol levels in the normal range, which can benefit both physical and mental health.

Keywords: atopic diseases, Chinese adolescents, cholesterol level, physical activity

Procedia PDF Downloads 122
663 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards has moved towards slimness, and this change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with objective measures (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circular, rectangular, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (using signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed relied primarily on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, the combination of keyboard design factors likely to result in higher performance and satisfaction was identified as L-shaped, 3 mm, and 60±10 g. The learning curve was analyzed and compared with a traditional standard keyboard to investigate the influence of user experience on keyboard operation; the results indicated that the optimal combination provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied comprehensively to touch devices and input interfaces that interact with people.
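
A minimal sketch of the Taguchi signal-to-noise ratios used to pick optimal factor levels; the accuracy scores below are invented examples, not the experiment's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) when a larger response is better (e.g., typing accuracy)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) when a smaller response is better (e.g., error count)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Invented accuracy scores (%) for two levels of one factor, e.g. key force.
accuracy_by_force = {"60±10g": [92.0, 95.0, 93.5], "85±10g": [88.0, 86.5, 90.0]}
for level, scores in accuracy_by_force.items():
    print(f"force {level}: S/N = {sn_larger_is_better(scores):.2f} dB")
# The factor level with the higher mean S/N ratio is taken as optimal.
```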

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 300
662 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)

Authors: Carolina Silva Ansélmo

Abstract:

Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient, and adequately planned, considering its potential users; an airport with good ground access conditions can provide users with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study focuses on airport access in a comparative scenario between bus, private vehicle, subway, taxi, and urban mobility transport applications to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share for each mode. The method applied was the Adaptive Choice-Based Conjoint (ACBC) technique using Sawtooth Software. Conjoint analysis, rooted in utility theory, is a survey technique that quantifies the customer's perceived utility when choosing between alternatives. Assessing user preferences provides insights into their priorities for product or service attributes, and an additional advantage of conjoint analysis is its requirement for a smaller sample size compared to other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay, and market dynamics, aiding strategic decision-making on customer experience, pricing, and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The case study questionnaire reached 213 valid responses for the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport. As a result, price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The estimated market share is led by urban mobility transport applications, followed by buses, private vehicles, taxis, and subways.
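
A minimal sketch of the logit share-of-preference rule commonly used to turn conjoint part-worth utilities into market shares; the utilities below are invented and merely ordered to echo the reported ranking.

```python
import numpy as np

# Invented total utilities per access mode for one respondent segment, summed
# from ACBC part-worths (price, access time, comfort, ...). Not the study's estimates.
modes = ["ride-hailing app", "bus", "private vehicle", "taxi", "subway"]
utilities = np.array([1.9, 1.4, 1.1, 0.7, 0.4])

shares = np.exp(utilities) / np.exp(utilities).sum()  # logit share of preference
for mode, share in sorted(zip(modes, shares), key=lambda p: -p[1]):
    print(f"{mode:18s} {share:.1%}")
```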

Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay

Procedia PDF Downloads 79
661 Natural Mexican Zeolite Modified with Iron to Remove Arsenic Ions from Water Sources

Authors: Maritza Estela Garay-Rodriguez, Mirella Gutierrez-Arzaluz, Miguel Torres-Rodriguez, Violeta Mugica-Alvarez

Abstract:

Arsenic is an element present in the earth's crust and is dispersed in the environment through natural processes and some anthropogenic activities. It is naturally released into the environment through the weathering and erosion of sulphide minerals, while activities such as mining and the use of pesticides or wood preservatives potentially increase the concentration of arsenic in air, water, and soil. The natural release of arsenic from geological materials is a threat to the world's drinking water sources. In the aqueous phase, arsenic is found in inorganic form, mainly as arsenate and arsenite; the contamination of groundwater by salts of this element gives rise to what is known as endemic regional hydroarsenicism. The International Agency for Research on Cancer (IARC) categorizes inorganic As within group I, as a substance with proven carcinogenic action in humans. The presence of As in groundwater has been found in several countries, such as Argentina, Mexico, Bangladesh, Canada, and the United States. Regarding the concentration of arsenic in drinking water, the World Health Organization (WHO) and the Environmental Protection Agency (EPA) establish a maximum concentration of 10 μg L⁻¹. In Mexico, in states such as Hidalgo, Morelos, and Michoacán, arsenic concentrations of around 1000 μg L⁻¹ have been found in water bodies, well above what is allowed by Mexican regulations, as NOM-127-SSA1-1994 establishes a limit of 25 μg L⁻¹. Given this problem, this research proposes the use of a natural Mexican zeolite (clinoptilolite type), native to the district of Etla in the central valley region of Oaxaca, as an adsorbent for the removal of arsenic. The zeolite was conditioned with iron oxide by the precipitation-impregnation method with a 0.5 M iron nitrate solution, in order to increase the natural adsorption capacity of the material. The removal of arsenic was carried out in a column with a fixed bed of conditioned zeolite, since this combines the advantages of a conventional filter with those of a natural adsorbent medium, providing a continuous treatment that is low-cost and relatively easy to operate, suitable for implementation in marginalized areas. The zeolite was characterized by XRD, SEM/EDS, and FTIR before and after the arsenic adsorption tests; the results showed that the modification methods used are adequate for preparing adsorbent materials, since they do not modify the zeolite structure. With a particle size of 1.18 mm, an initial As(V) concentration of 1 ppm, a pH of 7, and room temperature, a removal of 98.7% was obtained, with an adsorption capacity of 260 μg As g⁻¹ zeolite. The results indicate that the conditioned zeolite is favorable for the elimination of arsenate in water containing up to 1000 μg As L⁻¹ and could be suitable for removing arsenate from well water.
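
A quick sketch of the mass balance behind the reported 98.7% removal and 260 μg g⁻¹ uptake; the treated volume and adsorbent dose shown are assumed values, chosen only so the arithmetic reproduces those figures.

```python
def batch_adsorption(c0_ug_l, ce_ug_l, volume_l, mass_g):
    """Standard mass balance: percent removal and uptake q (ug adsorbed per g)."""
    removal_pct = 100.0 * (c0_ug_l - ce_ug_l) / c0_ug_l
    q_ug_g = (c0_ug_l - ce_ug_l) * volume_l / mass_g
    return removal_pct, q_ug_g

# Illustrative run: 1000 ug/L initial As(V) (1 ppm), 13 ug/L residual,
# 1 L treated with 3.8 g of iron-conditioned zeolite (assumed volume and dose).
removal, q = batch_adsorption(c0_ug_l=1000.0, ce_ug_l=13.0, volume_l=1.0, mass_g=3.8)
print(f"removal: {removal:.1f}%  uptake: {q:.0f} ug As / g zeolite")
```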

Keywords: adsorption, arsenic, iron conditioning, natural zeolite

Procedia PDF Downloads 173
660 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be part of future driving: its immersive technology can show directions and maps, indicating important places with graphic elements when the driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex one in which situations requiring the driver's immediate attention commonly occur and force decisions that help avoid accidents. The main aim of the project is therefore the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the level of attention in drivers, since it is important to know the effect these devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are combined in a driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians within the simulation can be controlled, providing driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors produce continuous analog signals that need conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering eliminates unwanted frequency bands so that the signal is interpretable and noise-free before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement k-NN (k-nearest neighbor) and decision tree classifiers (supervised learning methods) that enable the study of the data, the identification of patterns, and the determination, by classification, of the different effects of augmented reality on drivers. The expected results of this project are a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
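
A minimal sketch of the classification stage described above, using scikit-learn's k-NN and decision tree classifiers. The feature matrix and labels are synthetic placeholders, not the project's recordings; in practice the columns would be features extracted from the EEG/ECG/EMG signals (e.g., band power, heart-rate variability, RMS amplitude).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: rows are driving segments, columns are signal features.
# Labels mark whether the AR device was active during the segment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("k-NN accuracy:", knn.score(X_test, y_test))
print("tree accuracy:", tree.score(X_test, y_test))
```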

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 142
659 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life, and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation owing to the relatively low thermal conductivity of this metal. Cooling applications are crucial in machining, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic, and semi-synthetic types are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants due to their combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices, and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films; these strong films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear, and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented comparable performance with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips were subject to uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
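
A 24-run full factorial is consistent with, for example, a 3 × 2 × 2 × 2 layout. The sketch below enumerates such a design; the factor names and levels are assumptions for illustration, since the paper does not list them.

```python
from itertools import product

# Assumed factor levels for illustration only (3 * 2 * 2 * 2 = 24 runs).
fluids = ["Vasco1000", "VO blend A", "VO blend B"]   # cutting fluids (names assumed)
tools = ["uncoated WC", "coated WC"]                  # tool materials (assumed)
speeds = [60, 90]                                     # cutting speed, m/min (assumed)
feeds = [0.1, 0.2]                                    # feed rate, mm/rev (assumed)

runs = list(product(fluids, tools, speeds, feeds))
print(len(runs), "tests")   # 24
for i, run in enumerate(runs[:3], 1):
    print(i, run)           # first few run definitions
```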

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 279
658 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher institutions can benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in the prevalence of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, given the complexity of today's student body in a changing world, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed. The main purpose of this study was to identify such opportunities in colleges and to develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of five hundred and one third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The results showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05), while out-of-classroom interaction, school climate, and home environment contributed indirectly. The study concluded that the classroom offers many opportunities for teachers to teach, model, and integrate character development among their students. Public colleges and universities are therefore encouraged to deliberately foster and implement experiences that cultivate character within the classroom. These may contribute tremendously to students' character development outcomes and hence yield effective models of behaviour analysis in higher education.
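
For readers unfamiliar with structural equation modelling, below is a hedged sketch of how a path model of this shape could be specified with the third-party semopy package (lavaan-style syntax). The variable names and data are invented; this is not the study's actual model file.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package

# Synthetic scale scores standing in for the survey data; column names are assumptions.
rng = np.random.default_rng(0)
n = 501
data = pd.DataFrame({c: rng.normal(size=n) for c in
                     ["in_class", "out_class", "climate", "lifestyle", "home", "peers"]})
# Build the outcome so in-classroom interaction dominates, mimicking the reported pattern.
data["outcomes"] = 0.75 * data["in_class"] + rng.normal(scale=0.6, size=n)

desc = "outcomes ~ in_class + out_class + climate + lifestyle + home + peers"
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients and p-values
```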

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 138
657 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River

Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang

Abstract:

The analysis and study of open channel flow dynamics in river applications has been based on flow modelling using discrete numerical models built on hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical and transverse dimensions, imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and the extrapolation of various scenarios. The Magdalena River in Colombia drains the country from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons, and as the meander evolves at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a RiverSurveyor ADCP. In order to characterize the erosion process occurring through the meander, extensive suspended sediment and river bed samples were retrieved, as well as soil borings over the banks. Based on the DEM ground digital mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparing available historical data from a nearby hydrologic gauging station. Although the model was able to effectively predict the overall flow processes in the region, its spatial characteristics and limitations related to pressure conditions did not allow an accurate representation of the erosion processes occurring over specific bank areas and dwellings; notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
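
Calibration against a gauging station is typically scored with a goodness-of-fit metric; the abstract does not state which one was used here, so as a minimal illustration the sketch below computes the widely used Nash-Sutcliffe efficiency on synthetic stage series.

```python
import numpy as np

# Synthetic placeholder series: gauged stage versus modelled stage at the station.
observed = np.array([2.1, 2.4, 3.0, 3.6, 3.2, 2.8])   # observed stage, m
modelled = np.array([2.0, 2.5, 2.9, 3.5, 3.3, 2.7])   # model output, m

# Nash-Sutcliffe efficiency: 1 is a perfect fit; values above ~0.5 are often
# considered acceptable for hydraulic/hydrologic calibration.
nse = 1 - np.sum((observed - modelled) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"NSE = {nse:.3f}")
```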

Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander

Procedia PDF Downloads 319
656 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device, including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215), for real-time ECG monitoring. There are many ways to make a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are difficult to disperse uniformly in elastomer compared with CNFs, while silver nanowires are relatively expensive and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode part was made by vortex mixing and baking in a prepared mold; to optimize electrical performance and the uniformity of dispersion, we developed a unique mixing and baking process. The sticky patch part was made by patterning and detaching from a smooth-surface substrate after spin-coating a soft skin adhesive; in this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mounted on the skin, and directly connectable to the system. To evaluate electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested at different CNF concentrations and dry electrode thicknesses. The results showed that CNF concentration and dry electrode thickness are important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. The ECG signals measured with this system are stable and were successfully monitored in real time. In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
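
A minimal sketch of the kind of filtering described above, using SciPy to band-pass an ECG trace and notch out mains interference. The sampling rate, cutoff frequencies, and the synthetic trace are assumptions for illustration, not the device's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 500.0  # sampling rate, Hz (assumed)

# Synthetic stand-in for a raw ECG trace: a slow "heartbeat" component plus
# 50 Hz mains hum and low-frequency baseline wander.
t = np.arange(0, 10, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 0.3 * np.sin(2 * np.pi * 50 * t)
       + 0.2 * np.sin(2 * np.pi * 0.3 * t))

# Band-pass 0.5-40 Hz to remove baseline drift and high-frequency noise.
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)
ecg = filtfilt(b, a, raw)

# Notch filter for mains interference (50 Hz here; 60 Hz in some regions).
bn, an = iirnotch(50, Q=30, fs=fs)
ecg = filtfilt(bn, an, ecg)
```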

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 185
655 Cyber Bullying, Online Risks and Parental Mediation: A Comparison between Adolescent Reports and Parent Perceptions in South Africa

Authors: Masa Popovac, Philip Fine

Abstract:

Information and Communication Technologies (ICTs) have altered our social environments, and young people in particular have immersed themselves in the digital age. Despite countless benefits, younger ICT users are being exposed to various online risks such as contact with strangers, viewing of risky content, sending or receiving sexually themed images or comments (i.e. 'sexting'), as well as cyber bullying. Parents may not be fully aware of the online spaces their children inhabit and often struggle to implement effective mediation strategies. This quantitative study explored (i) three types of online risks (contact risks, content risks and conduct risks), (ii) cyber bullying victimization and perpetration, and (iii) parental mediation among a sample of 689 South African adolescents aged 12 to 17 years. Survey data were also collected from 227 of their parents regarding their perceptions of their child's online experiences. Adolescent behaviors and parental perceptions were compared on the three variables in the study. Findings reveal various online risk-taking behaviors. In terms of contact risks, 56% of adolescents reported having contact with at least one online stranger, with many meeting these strangers in person. Content risks included exposure to harmful information such as websites promoting extreme diets or self-harm, as well as inappropriate content: 84% of adolescents had seen violent content and 75% had seen sexual content online. Almost 60% of adolescents engaged in conduct risks such as sexting. Eight online victimization behaviors were examined, and 79% of adolescents had experienced at least one of them, with a third (34%) defining the experience as cyber bullying. A strong connection between victimization and perpetration was found, with 63% of adolescents being both a victim and a perpetrator. Very little parental mediation of ICT use was reported. Inferential statistics revealed that parents consistently underestimated their child's online risk-taking behaviors as well as their cyber bullying victimization and perpetration, and overestimated mediation strategies in the home. The generational gap in the knowledge and use of ICTs is a barrier to effective parental mediation and online safety, since many negative online experiences of adolescents go undetected and can continue for extended periods, exacerbating the potential psychological and emotional distress. The study highlights the importance of including parents in online safety efforts.
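
The abstract does not specify which inferential tests were used, so as one plausible illustration the sketch below runs a paired t-test on synthetic parent-child dyad data, testing whether parents systematically under-report their child's risk exposure.

```python
import numpy as np
from scipy import stats

# Synthetic matched dyads: adolescent-reported risk counts versus the count
# the matched parent believes occurred (parents systematically lower here).
rng = np.random.default_rng(1)
adolescent_reports = rng.poisson(3.0, size=227)
parent_estimates = rng.poisson(1.8, size=227)

t, p = stats.ttest_rel(adolescent_reports, parent_estimates)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # significant p: parents underestimate on average
```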

Keywords: cyber bullying, online risk behaviors, parental mediation, South Africa

Procedia PDF Downloads 484
654 Wastewater Treatment in the Abrasives Industry via Fenton and Photo-Fenton Oxidation Processes: A Case Study from Peru

Authors: Hernan Arturo Blas López, Gustavo Henndel Lopes, Antonio Carlos Silva Costa Teixeira, Carmen Elena Flores Barreda, Patricia Araujo Pantoja

Abstract:

Phenols are toxic to life and the environment and may come from many sources. Uncured phenolic monomers present in the phenolic resins used as binders in grinding wheels and emery paper can contaminate industrial wastewaters in abrasives manufacturing plants. Furthermore, vestiges of resol and novolac resins generated by the wear of abrasives are also possible sources of water contamination by phenolics in these facilities. Fortunately, advanced oxidation by dark Fenton and photo-Fenton techniques is capable of oxidizing phenols and their degradation products up to their mineralization into H₂O and CO₂. The maximum allowable concentrations of phenols in Peruvian water bodies are very low, such that insufficiently treated effluents from the abrasives industry represent a potential environmental noncompliance. The current case study highlights findings obtained during the lab-scale application of Fenton's and photo-assisted Fenton's chemistries to real industrial wastewater samples from an abrasives manufacturing plant in Peru. The goal was to reduce the phenolic content and sample toxicity. For this purpose, two independent variables, reaction time and the effect of ultraviolet radiation, were studied for their impacts on the concentration of total phenols, total organic carbon (TOC), biological oxygen demand (BOD), and chemical oxygen demand (COD). In this study, diluted samples (1 L) of the industrial effluent were treated with Fenton's reagent (H₂O₂ and Fe²⁺ from FeSO₄·H₂O) for 10 min in a photochemical batch reactor (Alphatec RFS-500, Brazil) at pH 2.92. For the photo-Fenton tests, 9 W UV-A, UV-B, and UV-C lamps were evaluated. All process conditions achieved 100% phenol degradation within 5 minutes. TOC, BOD, and COD decreased by 49%, 52%, and 86%, respectively (all processes together). However, the Fenton treatment was not capable of reducing BOD, COD, and TOC below a certain value even after 10 minutes, unlike photo-Fenton. It was also possible to conclude that the processes studied here degrade other compounds in addition to phenols, which is an advantage. In all cases, elevated effluent dilution factors and high amounts of oxidant negatively impact the overall economics of the processes investigated.
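
The pooled reductions can be expressed as simple removal-efficiency bookkeeping: Cf = C0 * (1 - removal). The sketch below uses the reduction fractions from the abstract; the initial concentrations are assumed purely for illustration.

```python
# Removal-efficiency bookkeeping for the pooled treatment results.
initial = {"TOC": 100.0, "BOD": 100.0, "COD": 100.0}   # mg/L, assumed values
reduction = {"TOC": 0.49, "BOD": 0.52, "COD": 0.86}    # fractions reported in the abstract

for param, c0 in initial.items():
    cf = c0 * (1 - reduction[param])                   # final concentration
    print(f"{param}: {c0:.0f} -> {cf:.0f} mg/L ({reduction[param]:.0%} removal)")
```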

Keywords: fenton oxidation, wastewater treatment, phenols, abrasives industry

Procedia PDF Downloads 317
653 The Personal Characteristics of Nurse Managers and the Personal and Professional Factors That Affect Them

Authors: Handan Alan, Ulkü Baykal

Abstract:

Personal characteristics help people understand and recognize both themselves and others, and they are known to have direct effects on managerial behavior. Managers' personalities indicate how they think, perceive reality, and relate to others, and they affect decision-making and problem-solving methods. This descriptive study aims to determine the personal characteristics of nurse managers and the personal and professional factors that affect them, since sufficient data on personal characteristics do not exist despite the focus on leadership and managerial characteristics in nursing. The study population consisted of nurses working in administrative positions at hospitals affiliated with the public hospitals union, research and practice hospitals affiliated with universities, and private hospitals in cities in the Marmara Region. The study sample consisted of nurse managers working in the hospitals that permitted the study (excluding private branch hospitals). The data were collected using the Five Factor Personality Inventory after obtaining the approval of the Clinical Research Ethics Committee of Çanakkale Onsekiz Mart University (approval date: 1.7.2015, decision no: 2015-01) and written official permission from the administrations of the hospitals included in the study. The data analysis used means and standard deviations (SD) as descriptive statistics, one-way analysis of variance for multi-group comparisons, and the independent samples t-test for two-group comparisons; a significance threshold of p < 0.05 was used to evaluate the findings. The study included 900 nurse managers, who obtained the highest mean score on the conscientiousness dimension (X̄ = 4.22 ± 0.35), followed by their mean scores on agreeableness (X̄ = 4.06 ± 0.40), intelligence (X̄ = 4.05 ± 0.37), extroversion (X̄ = 3.50 ± 0.43), and emotional instability (X̄ = 2.07 ± 0.53). Statistically significant differences in Five Factor Personality Inventory scores were found across the independent variables of age, gender, marital status, education level, work institution, professional experience, institutional experience, managerial experience, administrative position, work unit, and managerial education (p < 0.05). In conclusion, the nurse managers described themselves as highly conscientious, and statistically significant associations were found between the inventory mean scores and their personal and professional characteristics.
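
Below is a minimal sketch of the two comparison types reported: an independent-samples t-test for a two-group variable (e.g., gender) and a one-way ANOVA for a multi-group variable (e.g., education level). The scores, group means, and group sizes are all invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic conscientiousness scores for two groups (two-group comparison).
group_a = rng.normal(4.2, 0.35, 60)
group_b = rng.normal(4.1, 0.35, 60)
t, p = stats.ttest_ind(group_a, group_b)

# Synthetic scores for three education-level groups (multi-group comparison).
edu1, edu2, edu3 = (rng.normal(m, 0.35, 40) for m in (4.0, 4.2, 4.3))
f, p_anova = stats.f_oneway(edu1, edu2, edu3)

print(f"t = {t:.2f} (p = {p:.3f}); F = {f:.2f} (p = {p_anova:.3f})")
```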

Keywords: nurse manager, personality, personal characteristics, professional characteristics

Procedia PDF Downloads 258
652 Review of the Safety of Discharge on the First Postoperative Day Following Carotid Surgery: A Retrospective Analysis

Authors: John Yahng, Hansraj Riteesh Bookun

Abstract:

Objective: This was a retrospective cross-sectional study evaluating the safety of discharge on the first postoperative day following carotid surgery, principally carotid endarterectomy. Methods: Between January 2010 and October 2017, 252 patients with a mean age of 72 years underwent carotid surgery by seven surgeons. Their medical records were reviewed, and their operative and complication timelines were entered into a database. Descriptive statistics were used to analyse the pooled responses and our indicator variables; the statistical package used was STATA 13. Results: There were 183 males (73%), and the comorbid burden was as follows: ischaemic heart disease (54%), diabetes (38%), hypertension (92%), stage 4 kidney impairment (5%), and current or ex-smoking (77%). The main indications were transient ischaemic attacks (42%), stroke (31%), asymptomatic carotid disease (16%), and amaurosis fugax (8%). 247 carotid endarterectomies were performed (109 with patch arterioplasty, 88 with the eversion and transection technique, 50 with endarterectomy only), along with 2 carotid bypasses, 1 embolectomy, 1 thrombectomy with patch arterioplasty, and 1 excision of a carotid body tumour. 92% of the cases were performed under general anaesthesia, and a shunt was used in 29% of cases. The mean length of stay was 5.1 ± 3.7 days (range 2 to 22 days); no patient was discharged on day 1. The mean time from admission to surgery was 1.4 ± 2.8 days (range 0 to 19 days), and the mean time from surgery to discharge was 2.7 ± 2.0 days (range 0 to 14 days). 36 complications were encountered over this period: 12 failed repairs (5 major strokes, 2 minor strokes, 3 transient ischaemic attacks, 1 cerebral bleed, 1 occluded graft), 11 bleeding episodes requiring a return to the operating theatre, 5 adverse cardiac events, 3 cranial nerve injuries, 2 respiratory complications, 2 wound complications, and 1 acute kidney injury. There were no deaths. 17 complications occurred on postoperative day 0, 11 on postoperative day 1, 6 on postoperative day 2, and 2 on postoperative day 3; 78% of all complications occurred before the second postoperative day. Of the complications occurring on the second or third postoperative day, 4 (1.6%) were bleeding episodes, 1 (0.4%) was a failed repair, 1 (0.4%) was a respiratory complication, and 1 (0.4%) was a wound complication. Conclusion: Although it has been common practice to discharge patients on the second postoperative day following carotid endarterectomy, we find here that discharge on the first postoperative day is safe. The overall complication rate is low, and most complications are captured before the second postoperative day. We suggest that patients having an uneventful first 24 hours after surgery be discharged on the first day. This should reduce hospital length of stay and the health economic burden.
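
The quoted 78% figure follows directly from the per-day complication counts reported in the abstract; a short check:

```python
# Cumulative share of complications captured by the end of each postoperative
# day, using the counts reported in the abstract (17, 11, 6, 2 on days 0-3).
counts = {0: 17, 1: 11, 2: 6, 3: 2}
total = sum(counts.values())        # 36 complications in all

captured = 0
for day, n in counts.items():
    captured += n
    print(f"by end of day {day}: {captured}/{total} = {captured / total:.0%}")
# Days 0-1 together capture 28/36 = 78%, the figure quoted in the abstract.
```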

Keywords: carotid, complication, discharge, surgery

Procedia PDF Downloads 166
651 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques

Authors: Melese Wondatir

Abstract:

Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research by acquiring essential information about the flood, including its scope and the susceptible areas. The use of geospatial data made it possible to identify locations of severe flood damage and efficient mitigation techniques. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank the factors according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) obtained was 0.000866 (< 0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83,331.14 m², 476,987.15 m², 24,247.29 m², and 15.83 m² of the region show very low, low, medium, high, and very high vulnerability to flooding, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage. Furthermore, the ROC AUC demonstrates that the classified vulnerability map achieves an accuracy of 91.0% based on 117 sample points. By implementing strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. The research findings also highlight the urgent need for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly near major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
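
The consistency ratio the abstract reports can be reproduced mechanically from a pairwise comparison matrix via CI = (λmax - n) / (n - 1) and CR = CI / RI. The sketch below uses an illustrative three-criteria matrix rather than the study's full ten-criteria one.

```python
import numpy as np

# Illustrative AHP pairwise comparison matrix for three criteria (placeholder
# values; the study's actual judgements are not given in the abstract).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])
n = A.shape[0]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # Saaty's random indices

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
lambda_max = np.linalg.eigvals(A).real.max()
CI = (lambda_max - n) / (n - 1)
CR = CI / RI[n]
print(f"lambda_max = {lambda_max:.4f}, CI = {CI:.4f}, CR = {CR:.4f}")
# CR < 0.1 means the pairwise judgements are acceptably consistent.
```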

Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making

Procedia PDF Downloads 71