Search results for: load modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4268

458 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake

Authors: Sina Abadzadesahraei, Stephen Déry, John Rex

Abstract:

Climate research has repeatedly identified strong associations between anthropogenic emissions of greenhouse gases and observed increases of global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region’s hydrological cycle, and thus its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction, which depends largely on subsurface hydraulic fracturing (‘fracking’) and uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling. Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project is a prime example of interdisciplinary research, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. A fruitful collaboration between academia, government and industry has therefore been established to fulfill the objectives of this research in a meaningful manner. The project aims to provide numerous benefits to BC communities, and its outcomes and detailed information can be a valuable asset to researchers examining the effect of climate change on water resources worldwide.

Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction

Procedia PDF Downloads 243
457 The Effects of Billboard Content and Visible Distance on Driver Behavior

Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari

Abstract:

Distracted driving has been one of the most integral concerns surrounding our daily use of vehicles since the invention of the automobile. While much attention has recently been given to cell phone-related distraction, commercial billboards along roads are also candidates for drivers' visual and cognitive distraction, as they may take drivers’ eyes off the road and their minds off the driving task to see, perceive and think about the billboard’s content. Using a driving simulator and a head-mounted eye-tracking system, speed change, acceleration, deceleration, throttle response, collision, lane changing, and offset from the center of the lane data, along with gaze fixation duration and frequency data, were collected in this study. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on drivers’ behavior. Participants glanced at the billboards several times with different frequencies, the maximum of which occurred on the billboard with the highest cognitive load. About 74% of the participants did not look at billboards for more than two seconds at each glance, except for the billboard with a short visible area. Analysis of variance (ANOVA) was performed to examine variations in driving behavior across the areas where billboards were invisible, readable, and already passed. The results show a slight difference in speed, throttle, brake, steering velocity, and lane changing among the different areas. Brake force and deviation from the center of the lane increased in the readable area in comparison with the visible area, and speed increased right after each billboard. The results indicated that billboards have a significant effect on driving performance and visual attention based on their content and visibility status. Generalized linear model (GLM) analysis showed no connection between participants’ age and driving experience and gaze duration. However, the visible distance of the billboard, gender, and billboard content had a significant effect on gaze duration.
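
As a rough illustration of the analysis described above, the sketch below runs a one-way ANOVA on a driving metric (mean speed) grouped by billboard visibility zone, and a Gaussian GLM of gaze duration on visible distance and gender. The data frame, column names, and values are hypothetical placeholders, not the study's dataset.

```python
# Hypothetical sketch of the ANOVA / GLM analysis described in the abstract.
# Column names and data are illustrative placeholders, not the study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.DataFrame({
    "zone": ["invisible"] * 3 + ["readable"] * 3 + ["post"] * 3,
    "speed": [96.1, 94.8, 95.5, 92.3, 91.7, 93.0, 97.2, 96.6, 98.1],   # km/h
    "gaze_s": [0.0, 0.0, 0.0, 1.8, 2.4, 1.5, 0.4, 0.6, 0.3],           # s
    "visible_dist": [500, 500, 500, 150, 150, 150, 0, 0, 0],           # m
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M", "F"],
})

# One-way ANOVA: does mean speed differ across billboard zones?
groups = [g["speed"].values for _, g in df.groupby("zone")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA on speed: F={f_stat:.2f}, p={p_val:.3f}")

# GLM: gaze duration as a function of visible distance and gender.
glm = smf.glm("gaze_s ~ visible_dist + gender", data=df,
              family=sm.families.Gaussian()).fit()
print(glm.summary())
```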

Keywords: ANOVA, billboards, distracted driving, drivers' behavior, driving simulator, eye-tracking system, GLM

Procedia PDF Downloads 115
456 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or the survivability) of the AGCV against an enemy's attack is a very important step. In the vulnerability analysis, penetration equations are usually used to compute the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific material of the target and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters appropriately in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the calculated result from the simulation and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to obtain optimized overall performance. As a result of the analysis, the following were found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than that with both the side and rear surfaces fixed. By using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. By using the simulation tool ANSYS with carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to predict penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
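
A minimal sketch of the error metric used to compare simulation and experiment might look like the following; the penetration-depth values and mesh sizes are invented placeholders, and the ANSYS runs themselves are assumed to have been performed separately.

```python
# Hypothetical sketch: RMS error between simulated and experimental
# penetration depths for several candidate mesh sizes. All numbers are
# illustrative placeholders, not results from the paper.
import numpy as np

experimental_depth_mm = np.array([12.4, 18.9, 25.3, 31.0])  # reference data

# Simulated depths assumed to come from separate ANSYS runs at each mesh size.
simulated_depth_mm = {
    0.9: np.array([11.1, 17.2, 23.0, 28.4]),
    0.7: np.array([11.9, 18.1, 24.4, 30.1]),
    0.5: np.array([12.3, 18.7, 25.1, 30.8]),
}

def rms_error(sim, exp):
    """Root-mean-square error between simulated and experimental depths."""
    return float(np.sqrt(np.mean((sim - exp) ** 2)))

for mesh, sim in sorted(simulated_depth_mm.items()):
    print(f"mesh {mesh:.1f} mm -> RMS error "
          f"{rms_error(sim, experimental_depth_mm):.3f} mm")
```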

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 374
455 Structure and Mechanics Patterns in the Assembly of Type V Intermediate-Filament Protein-Based Fibers

Authors: Mark Bezner, Shani Deri, Tom Trigano, Kfir Ben-Harush

Abstract:

Intermediate filament (IF) protein-based fibers are among the toughest fibers in nature, as has been shown for native hagfish slime threads and for synthetic fibers based on type V IF proteins, the nuclear lamins. It is assumed that their mechanical performance stems from two major factors: (1) the transition from elastic α-helices to stiff β-sheets during tensile load; and (2) the specific organization of the coiled-coil proteins into a hierarchical network of nano-filaments. Here, we investigated the interrelationship between these two factors by using wet-spun fibers based on C. elegans (Ce) lamin. We found that Ce-lamin fibers, whether assembled in aqueous or alcoholic solutions, had the same nonlinear mechanical behavior, with the elastic region ending at ~5% strain. The pattern of the transition was, however, different: the ratio between α-helices and β-sheets/random coils was relatively constant up to a 20% strain for fibers assembled in an aqueous solution, whereas for fibers assembled in 70% ethanol, the transition ended at a 6% strain. This structural phenomenon in alcoholic solution probably occurred through the transition between compacted and extended conformations of the random coil, and not between α-helices and β-sheets, as cycle analyses had suggested. The different transition pattern can also be explained by the different higher-order organization of Ce-lamins in aqueous or alcoholic solutions, as demonstrated by introducing a point mutation in a conserved residue of the Ce-lamin gene that alters the structure of the Ce-lamin nano-fibrils. In addition, biomimicking the layered structure of silk and hair fibers by coating the Ce-lamin fiber with a hydrophobic layer enhanced fiber toughness and led to a reversible transition between the α-helix and the extended conformation. This work suggests that different hierarchical structures, which are formed by specific assembly conditions, lead to diverse secondary structure transition patterns, which in turn affect the fibers’ mechanical properties.

Keywords: protein-based fibers, intermediate filaments (IF) assembly, toughness, structure-property relationships

Procedia PDF Downloads 93
454 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide that have special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely follow documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are graphically input. Hence, further investigation is necessary to establish a methodology to automatically generate parametric masonry components. These components would be developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research conducted on HBIM modeling of masonry structural elements and the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the "HBIM and Masonry" keywords from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software. This software extracts the main keywords in these studies to retrieve the relevant works. It also calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed for the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future suggestions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists to make more accurate and reliable HBIM models for historic masonry buildings.
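
The scientometric step can be approximated by a simple keyword co-occurrence (link strength) count of the kind VOSviewer computes internally; the bibliographic records below are invented placeholders, not the reviewed corpus.

```python
# Hypothetical sketch of a keyword co-occurrence (link strength) count,
# similar in spirit to the VOSviewer analysis described above.
from collections import Counter
from itertools import combinations

records = [  # placeholder author-keyword sets, one per bibliographic record
    {"hbim", "masonry", "point cloud"},
    {"hbim", "parametric modelling", "masonry"},
    {"hbim", "point cloud", "lod"},
]

link_strength = Counter()
for keywords in records:
    for a, b in combinations(sorted(keywords), 2):
        link_strength[(a, b)] += 1  # co-occurrence within the same record

for pair, strength in link_strength.most_common(5):
    print(pair, strength)
```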

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 148
453 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities, e.g. electricity and telephone, was floated, the scientific community has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements for the unstructured and uncorrelated data generated by smart sensors as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass cloud security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
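
A hedged sketch of the kind of ANN classifier proposed here is shown below, using scikit-learn's MLPClassifier on placeholder traffic features; the real model architecture, features, and training data are not specified in the abstract and are assumed for illustration only.

```python
# Hypothetical sketch of an ANN-based intrusion detector for smart grid
# cloud traffic. Features, labels, and architecture are assumptions for
# illustration; the abstract does not specify them.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # e.g. packet/flow statistics
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # 1 = attack, 0 = normal (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("test accuracy:", model.score(scaler.transform(X_test), y_test))
```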

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 302
452 Microalgae for Plant Biostimulants on Whey and Dairy Wastewaters

Authors: Sergejs Kolesovs, Pavels Semjonovs

Abstract:

Whey and dairy wastewaters, if disposed of in the environment without proper treatment, cause serious environmental risks, contributing to environmental pollution and climate change. Biological treatment of wastewater is considered the most eco-friendly approach compared to chemical treatment methods. Research shows that dairy wastewater can potentially be remediated by the use of microalgae, thus significantly reducing the content of carbohydrates, P, N, K and other pollutants. Moreover, it has been shown that the use of dairy wastewaters results in higher microalgae biomass production. In recent decades, microalgal biomass has attracted great interest for its potential applications in pharmaceuticals, biomedicine, health supplementation, cosmetics, animal feed, plant protection, bioremediation and biofuels. It has been shown that lipid productivity on whey and dairy wastewater is higher compared with standard cultivation media and occurs without the need to induce specific stress conditions such as N starvation. Moreover, microalgae biomass production, which is usually associated with high production costs, may benefit in two ways: enhanced productivity of microalgae biomass or target substances on a cheap growth substrate, and effective management of whey and dairy wastewaters, both of which are significant for decreasing total production costs in both processes. This becomes especially important when large-volume, low-cost industrial microalgal biomass production is anticipated for further use in crop agriculture as plant growth stimulants, biopesticides, soil fertilisers or remediating solutions. The environmental load of dairy wastewaters can be significantly decreased when microalgae are grown in coculture with other microorganisms. This enhances the utilisation of lactose, which is the main C source in whey and dairy wastewaters but is not easily metabolised by most of the microalgal species chosen. Our study shows that certain microalgae strains can be used in the treatment of industrial wastewaters containing residual sugars and in decreasing their concentration, confirming that further extensive research on dairy wastewater pre-treatment options for effective cultivation of microalgae, carbon uptake and metabolism, strain selection and choice of coculture candidates is needed for further optimisation of the process.

Keywords: microalgae, whey, dairy wastewaters, sustainability, plant biostimulants

Procedia PDF Downloads 78
451 Gassing Tendency of Natural Ester Based Transformer Oils: Low Alkane Generation in Stray Gassing Behaviour

Authors: Thummalapalli CSM Gupta, Banti Sidhiwala

Abstract:

Mineral oils of naphthenic and paraffinic types have traditionally been used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer and cooling. The performance of these oils has been proven in the field over many decades, and the condition monitoring and diagnosis of transformer performance have been carried out successfully through oil properties and dissolved gas analysis methods. Different types of gases effectively represent various types of faults due to components or operating conditions. While a large database on dissolved gas analysis for mineral-oil-based transformer oils, together with various models for fault prediction and analysis, has been generated in the industry, oil specifications and standards have also been modified to include stray gassing limits, which cover low-temperature faults and serve as an effective preventive maintenance tool for understanding the reasons for the breakdown of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Some of their benefits include biodegradability, a higher fire point, improved load capability of the transformer and improved solid insulation life compared to mineral oils. However, the evolution of stray gases such as hydrogen and hydrocarbons like methane (CH4) and ethane (C2H6) shows very high values, much higher than the limits of mineral oil standards. Though standards for these types of esters are yet to evolve, the high hydrocarbon gas values of the products available in the market are of concern, as they might be interpreted as a fault in transformer operation. The current paper focuses on developing a natural ester based transformer oil that shows very low levels of stray gassing by standard test methods, with values much lower than those of currently available products; experimental results under various test conditions and the underlying mechanism are presented.

Keywords: biodegradability, fire point, dissolved gas analysis, stray gassing

Procedia PDF Downloads 80
450 Hydrogeophysical Investigations and Mapping of Ingress Channels along the Blesbokspruit Stream in the East Rand Basin of the Witwatersrand, South Africa

Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola

Abstract:

Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and catapulted the city to becoming the business and wealth capital of the country. But with gradual depletion of resources, a stoppage in the extraction of underground water from mines and other factors relating to the survival of the mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy metal residue, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain, and mitigate the adverse environmental impacts of polluted mine water flowing into local streams, affecting local ecosystems and livelihoods downstream. As part of the mitigation efforts, the diagnosis and monitoring of sites with polluted groundwater or surface water has become important. Geophysical surveys, in particular resistivity and magnetic surveys, were selected as some of the most suitable techniques for investigating local ingress points along one of the major streams cutting through the Witwatersrand basin, namely the Blesbokspruit, which is found in the eastern part of the basin. The aim of the surveys was to provide information that could be used to assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model: a low-resistivity overburden (10 to 200 Ω·m), underlain by a moderately resistive weathered layer (>300 Ω·m), which sits on more resistive crystalline bedrock (>500 Ω·m). Two locations of potential ingress channels were mapped across the two traverses at the site. The magnetic survey conducted at the site mapped a major NE-SW trending regional lineament with a strong magnetic signature, which was modeled to a depth beyond 100 m, with the potential to act as a conduit for the dispersion of stream water away from the stream, as it shares a similar orientation with the potential ingress channels mapped using the resistivity method.

Keywords: electrical resistivity, magnetic survey, Blesbokspruit, ingress

Procedia PDF Downloads 49
449 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. In a certain study, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on the station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, namely spherical, Gaussian and exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may consist of redundant stations that make little or no contribution to the network performance in providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations that were optimally relocated into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocations of the stations in both cases have shown that the new optimal locations managed to reduce the estimated variance, proving that locations play an important role in determining the optimal network.
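
To illustrate the semivariogram comparison step, the sketch below defines the three model forms and fits each to an empirical semivariogram with scipy; the lag distances and semivariance values are invented placeholders, not the Johor data, and the fit quality is judged here by a simple sum of squared errors rather than the paper's cross-validation.

```python
# Hypothetical sketch: fitting candidate semivariogram models to an
# empirical semivariogram. Lags and semivariances are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, nugget + sill)

def gaussian(h, nugget, sill, rng_):
    return nugget + sill * (1.0 - np.exp(-(np.asarray(h) / rng_) ** 2))

def exponential(h, nugget, sill, rng_):
    return nugget + sill * (1.0 - np.exp(-np.asarray(h) / rng_))

lags = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)   # lag distance (km)
gamma = np.array([2.1, 3.4, 5.0, 6.8, 7.5, 7.8, 7.9])        # semivariance

for name, model in [("spherical", spherical), ("gaussian", gaussian),
                    ("exponential", exponential)]:
    params, _ = curve_fit(model, lags, gamma, p0=[1.0, 7.0, 30.0], maxfev=5000)
    sse = float(np.sum((model(lags, *params) - gamma) ** 2))
    print(f"{name}: nugget={params[0]:.2f}, sill={params[1]:.2f}, "
          f"range={params[2]:.1f}, SSE={sse:.3f}")
```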

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 284
448 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by the use of the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a more balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
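
As a rough sketch of the attribute-ordering idea, the code below builds an attribute usage matrix from a query workload and computes Hamming distances between attribute usage vectors, which a Matching-style algorithm can then use to place similar attributes in the same vertical fragment; the queries and attributes are hypothetical and this is not the VPA-RTSBD implementation itself.

```python
# Hypothetical sketch: attribute usage matrix and Hamming distances between
# attribute usage vectors, the raw material for a Matching-style vertical
# partitioning algorithm. Queries and attributes are placeholders.
import numpy as np

attributes = ["obj_id", "lon", "lat", "timestamp", "speed", "payload"]
queries = [  # which attributes each frequent query touches
    {"obj_id", "lon", "lat"},
    {"obj_id", "lon", "lat", "timestamp"},
    {"obj_id", "payload"},
    {"obj_id", "speed", "timestamp"},
]

# usage[i, j] = 1 if query i uses attribute j
usage = np.array([[int(a in q) for a in attributes] for q in queries])

def hamming(u, v):
    """Number of queries in which exactly one of the two attributes is used."""
    return int(np.sum(u != v))

n = len(attributes)
dist = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        dist[i, j] = hamming(usage[:, i], usage[:, j])

print(dict(zip(attributes, usage.T.tolist())))
print(dist)  # small distances suggest attributes that belong in one fragment
```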

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 143
447 A Qualitative Study Exploring Factors Influencing the Uptake of and Engagement with Health and Wellbeing Smartphone Apps

Authors: D. Szinay, O. Perski, A. Jones, T. Chadborn, J. Brown, F. Naughton

Abstract:

Background: The uptake of health and wellbeing smartphone apps is largely influenced by popularity indicators (e.g., rankings), rather than evidence-based content. Rapid disengagement is common. This study aims to explore how and why potential users 1) select and 2) engage with such apps, and 3) how increased engagement could be promoted. Methods: Semi-structured interviews and a think-aloud approach were used to allow participants to verbalise their thoughts whilst searching for a health or wellbeing app online, followed by a guided search in the UK National Health Service (NHS) 'Apps Library' and Public Health England’s (PHE) 'One You' website. Recruitment took place between June and August 2019. Adults interested in using an app for behaviour change were recruited through social media. Data were analysed using the framework approach. The analysis is both inductive and deductive, with the coding framework being informed by the Theoretical Domains Framework. The results are further mapped onto the COM-B (Capability, Opportunity, Motivation - Behaviour) model. The study protocol is registered on the Open Science Framework (https://osf.io/jrkd3/). Results: The following targets were identified as playing a key role in increasing the uptake of and engagement with health and wellbeing apps: 1) psychological capability (e.g., reduced cognitive load); 2) physical opportunity (e.g., low financial cost); 3) social opportunity (e.g., embedded social media); 4) automatic motivation (e.g., positive feedback). Participants believed that the promotion of evidence-based apps on NHS-related websites could be enhanced through active promotion on social media, adverts on the internet, and in general practitioner practices. Future Implications: These results can inform the development of interventions aiming to promote the uptake of and engagement with evidence-based health and wellbeing apps, a priority within the UK NHS Long Term Plan ('digital first'). The targets identified across the COM-B domains could help organisations that provide platforms for such apps to increase impact through better selection of apps.

Keywords: behaviour change, COM-B model, digital health, mhealth

Procedia PDF Downloads 141
446 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development

Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo

Abstract:

Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms, several hundreds of meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but especially the related vessel operations are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season only. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses in which maritime navigational bridge simulators were used to train the oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies that complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on the oil spill behaviour in reaction to response measures.

Keywords: maritime training, oil spill response, simulation, vessel manoeuvring

Procedia PDF Downloads 151
445 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process

Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa

Abstract:

Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand to increase renewable energy production from organic matter has caused the AD substrate spectrum to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, which is generated annually at around 140 billion metric tons globally. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the concentration of the bioadditive that was most optimal for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as the feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation model was used to non-linearly regress the resulting data to estimate gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process, and the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of the lignin substrate added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in mL/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the rate at 300 mg was 0.1 mL/gVS.day higher than at 200 mg. The impact of the yeast-based bioadditive on the rate of production was most significant at 400 mg and 500 mg, where the rate improved by 0.1 mL/gVS.day and 0.2 mL/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content will be the next step in this research.
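
A minimal sketch of the modified Gompertz fit is given below, using the common parameterisation y(t) = P * exp(-exp(Rm * e / P * (lambda - t) + 1)); the cumulative biogas data points are invented placeholders, not the study's measurements.

```python
# Hypothetical sketch: non-linear regression of cumulative biogas data with
# the modified Gompertz model. Data points are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """P: biogas potential, Rm: max production rate, lam: lag phase (days)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t_days = np.array([0, 4, 8, 14, 21, 28, 42, 56], dtype=float)
biogas = np.array([0, 5, 22, 48, 70, 84, 95, 99], dtype=float)  # mL/gVS

(P, Rm, lam), _ = curve_fit(modified_gompertz, t_days, biogas,
                            p0=[100.0, 3.0, 2.0], maxfev=10000)
print(f"potential P={P:.1f} mL/gVS, rate Rm={Rm:.2f} mL/gVS.day, lag={lam:.1f} d")
```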

Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas

Procedia PDF Downloads 71
444 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with a delay in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to the complexity inherent in the datasets, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets, which can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and then a prognostic model for predicting aircraft component replacement in advance is developed by exploring aircraft historical data. The approach is based on a hybrid ensemble-based method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach in terms of performance using real-world aircraft operation and maintenance datasets, which span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach in handling multiclass imbalanced datasets; the results likewise show good performance compared to other baseline classifiers.
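
The general flavour of a resampling-plus-ensemble approach to imbalanced classification can be sketched as follows with imbalanced-learn and scikit-learn; this is a generic illustration on synthetic data, not the authors' actual pipeline or their aircraft dataset.

```python
# Hypothetical sketch of a resampling + ensemble pipeline for an imbalanced
# classification problem, in the spirit of the approach described above.
# Synthetic data only; not the authors' aircraft data or exact method.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)  # 5% minority class (replacements)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),              # oversample minority class
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(classification_report(y_te, pipe.predict(X_te)))
```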

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 156
443 ESL Material Evaluation: The Missing Link in Nigerian Classrooms

Authors: Abdulkabir Abdullahi

Abstract:

The paper is a pre-use evaluation of grammar activities in three primary English course books (two of which are international primary English course books and the other a popular Nigerian primary English course book). The titles are Cambridge Global English, Collins International Primary English, and Nigeria Primary English – Primary English. Grammar points and grammar activities in the three course books were identified, grouped, and evaluated. The grammar activity which was most common in the course books, the simple past tense, was chosen for evaluation, and the units which present simple past tense activities were selected to evaluate the extent to which the treatment of the simple past tense in each of the course books helps young learners of English as a second language in Nigeria, aged 8 – 11, level A1 to A2, who lack basic grammatical knowledge, to know grammar and communicate effectively. A bespoke checklist was devised, through the modification of existing checklists, for the purpose of the evaluation, to assess the extent to which the grammar activities promote the communicative effectiveness of Nigerian learners of English as a second language. The results of the evaluation and the analysis of the data reveal that the treatment of grammar, especially the treatment of the simple past tense, is evidently insufficient. While Cambridge Global English’s and Collins International Primary English’s treatment of grammar, the simple past tense, is underpinned by state-of-the-art theories of learning, language learning theories, second language learning principles, second language curriculum-syllabus design principles, and grammar learning and teaching theories, the grammar load is insignificantly low, and the grammar tasks do not promote creative grammar practice sufficiently. Nigeria Primary English – Primary English, on the other hand, treats grammar, the simple past tense, in the old-fashioned direct way. The book does not favour the communicative language teaching approach; there is no opportunity for learners to notice and discover grammar rules for themselves, and the book lacks the potency to promote creative grammar practice. The research and its findings, therefore, underscore the need to improve grammar content and increase the grammar activity types which engage learners effectively and promote sufficient creative grammar practice in EFL and ESL material design and development.

Keywords: evaluation, activity, second language, activity-types, creative grammar practice

Procedia PDF Downloads 63
442 The Importance of Developing Pedagogical Agency Capacities in Initial Teacher Formation: A Critical Approach to Advance in Social Justice

Authors: Priscilla Echeverria

Abstract:

This paper addresses initial teacher formation as a formative space in which pedagogy students develop a pedagogical agency capacity to contribute to social justice, considering ethical, political, and epistemic dimensions. The paper first discusses the concepts of agency, pedagogical interaction, and social justice from a critical perspective, and then offers preliminary results on the pedagogical agency capacity of novice teachers after the analysis of critical incidents as a research methodology. This study is motivated by the concern that, in response to the current neoliberal scenario, many initial teacher formation (ITF) programs have reduced the meaning of education to instruction, and pedagogy to methodology, favouring the formation of a technical professional over a reflective or critical one. From this concern, this study proposes that the restitution of the subject is an urgent task in teacher formation, so it is essential to enable the subject's capacity for action and to advance in eliminating institutionalized oppression insofar as it affects that capacity. Given that oppression takes place in human interaction, through this work I propose that initial teacher formation develop sensitivity and educate the gaze to identify oppression and take action against it, both in pedagogical interactions, which configure political, ethical, and epistemic subjectivities, and in the hidden and official curriculum. All this proceeds from the premise that modelling democratic and dialogical interactions is basic for any program that seeks to contribute to a more just and empowered society. The contribution of this study lies in the fact that it opens a discussion in an area about which we know little: the impact of the type of interactions offered by university teaching in ITF on the capacity of future teachers to be pedagogical agents. For this reason, this study seeks to gather evidence of the result of this formation by analysing the pedagogical agency capacity of novice teachers, or, in other words, how capable the graduates of secondary pedagogies are in their first pedagogical experiences of acting and making decisions that put the formative purposes they are capable of autonomously defining before the technical or bureaucratic issues imposed by the curriculum or the official culture. This discussion is part of my doctoral research, "The importance of developing the capacity for ethical-political-epistemic agency in novice teachers during initial teacher formation to contribute to social justice", which I am currently developing in the Educational Research program of the University of Lancaster, United Kingdom, as a Conicyt fellow for the 2019 cohort.

Keywords: initial teacher formation, pedagogical agency, pedagogical interaction, social justice, hidden curriculum

Procedia PDF Downloads 75
441 Parametric Study of a Washing Machine to Develop an Energy Efficient Program Regarding the Enhanced Washing Efficiency Index and Microorganism Removal Performance

Authors: Pelin Yilmaz, Gizemnur Yildiz Uysal, Emine Birci, Berk Özcan, Burak Koca, Ehsan Tuzcuoğlu, Fatih Kasap

Abstract:

Development of Energy Efficient Programs (EEP) is one of the most significant trends in the wet appliance industry in recent years. Thanks to EEPs, the energy consumption of a washing machine, one of the most energy-consuming home appliances, can shrink considerably, while its washing performance and textile hygiene remain almost unchanged. Herein, the goal of the present study is to achieve an optimum EEP algorithm providing excellent textile hygiene results as well as cleaning performance in a domestic washing machine. In this regard, a steam-pretreated cold wash approach combined with an innovative algorithm solution and a relatively short washing cycle duration was implemented. For the parametric study, steam exposure time, washing load, total water consumption, main-washing time, and spinning rpm, as the significant parameters affecting textile hygiene and cleaning performance, were investigated within a Design of Experiments study using the Minitab 2021 statistical program. For the textile hygiene studies, specific loads containing cotton carriers contaminated with Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa bacteria were washed. Then, the microbial removal performance of the designed programs was expressed as log reduction, calculated as the difference in microbial count per mL of the liquids in which the cotton carriers were immersed before and after washing. For the cleaning performance studies, tests were carried out with various types of detergents and the EMPA Standard Stain Strip. According to the results, the optimum EEP program provided an excellent hygiene performance of more than 2 log reduction of microorganisms and a perfect Washing Efficiency Index (Iw) of 1.035, which is greater than the value specified by EU ecodesign regulation 2019/2023.
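
The log-reduction figure quoted above follows directly from microbial counts before and after washing; a minimal sketch of the calculation, with invented counts, is:

```python
# Hypothetical sketch of the log-reduction calculation for microbial removal.
# Counts (CFU/mL) are illustrative placeholders, not measured values.
import math

def log_reduction(cfu_before, cfu_after):
    """log10 reduction between pre- and post-wash microbial counts."""
    return math.log10(cfu_before) - math.log10(cfu_after)

before = 1.0e6   # CFU/mL recovered from carriers before washing
after = 5.0e3    # CFU/mL recovered after the energy-efficient program
print(f"log reduction: {log_reduction(before, after):.2f}")  # ~2.3 log
```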

Keywords: washing machine, energy efficient programs, hygiene, washing efficiency index, microorganism, escherichia coli, staphylococcus aureus, pseudomonas aeruginosa, laundry

Procedia PDF Downloads 109
440 Clinical Evidence of the Efficacy of ArtiCovid (Artemisia annua Extract) on COVID-19 Patients in DRC

Authors: Munyangi Wa Nkola Jerome, MD, MCS, MPH

Abstract:

COVID-19 is a recently discovered contagious respiratory disease caused by SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2). The majority of people infected with SARS-CoV-2 are asymptomatic or mildly ill; about 14% of patients develop severe illness requiring hospitalization and oxygen support, and 5% of these are transferred to an intensive care unit. There is an urgent need for new treatments that can be used quickly to avoid the transfer of patients to intensive care and death. Objective: to evaluate the clinical activity (efficacy) of ArtiCovid. Hypothesis: administration of a teaspoon three times per day by COVID-19 patients (symptomatic, mild, or moderate forms) results in the disappearance of symptoms and improvement of biological parameters (including viral suppression). Clinical efficacy was defined as the disappearance of clinical signs after seven days of treatment, a reduction in the rate of patients transferred to intensive care units for mechanical ventilation, and a decrease in mortality related to this infection; paraclinical efficacy as the improvement of biological parameters (mainly D-dimer and CRP); and virological efficacy as suppression of the viral load after seven days of treatment (a negative control test on the seventh day). This was a pilot study using a standardized solution based on Artemisia annua (ArtiCovid). Authorization was obtained from the health authorities of the province of Central Kongo, and volunteer patients were recruited, mainly at Kinkanda Hospital, with tests and analyses carried out before and after treatment. The protocol obtained the approval of the ethics committee. The 50 patients who completed the treatment were aged between 2 and 70 years, with an average age of 36 years. More than half were male (56%). One in four patients was a health professional (25%); of the 12 health professionals, 4 were physicians. For those who reported the date of onset of the disease, the average duration between the appearance of the first symptoms and the medical consultation was 5 days. The 50 patients put on ArtiCovid were discharged alive with CRP levels substantially normalized. After seven to eight days, the control test came back negative. This pilot study suggests that ArtiCovid may be effective against COVID-19 infection.

Keywords: ArtiCovid, DRC, COVID-19, SARS-CoV-2

Procedia PDF Downloads 104
439 Enhanced Near-Infrared Upconversion Emission Based Lateral Flow Immunoassay for Background-Free Detection of Avian Influenza Viruses

Authors: Jaeyoung Kim, Heeju Lee, Huijin Jung, Heesoo Pyo, Seungki Kim, Joonseok Lee

Abstract:

Avian influenza viruses (AIV), type A influenza viruses of the Orthomyxoviridae family, are the primary cause of highly contagious respiratory diseases in birds. AIV are categorized on the basis of their surface glycoproteins, hemagglutinin and neuraminidase. Certain H5 and H7 subtypes of AIV have evolved into highly pathogenic avian influenza (HPAI) viruses, which have caused considerable economic loss to the poultry industry and led to severe public health crises. Several commercial kits have been developed for on-site detection of AIV. However, the sensitivity of these methods is too low to detect low virus concentrations in clinical samples and opaque stool samples. Here, we introduce a background-free near-infrared (NIR)-to-NIR upconversion nanoparticle-based lateral flow immunoassay (NNLFA) platform to yield a sensor that detects AIV within 20 minutes. Ca²⁺ ions in the shell were used as a heterogeneous dopant to enhance the NIR-to-NIR upconversion photoluminescence (PL) emission without inducing significant changes in the morphology and size of the UCNPs. In a mixture of opaque stool samples and gold nanoparticles (GNPs), which are components of commercial AIV LFAs, the background signal of the stool samples masks the absorption peak of the GNPs. However, UCNPs dispersed in the stool samples still show strong emission centered at 800 nm when excited at 980 nm, which enables the NNLFA platform to detect a 10-times lower viral load than a commercial GNP-based AIV LFA. The detection limit of the NNLFA for low pathogenic avian influenza (LPAI) H5N2 and HPAI H5N6 viruses was 10² EID₅₀/mL and 10³.⁵ EID₅₀/mL, respectively. Moreover, when opaque brown-colored samples were used as the target analytes, the strong NIR emission signal from the test line in the NNLFA confirmed the presence of AIV, whereas the commercial AIV LFA detected AIV only with difficulty. Therefore, we propose that this rapid and background-free NNLFA platform has the potential to detect AIV in the field, which could effectively prevent the spread of these viruses at an early stage.

Keywords: avian influenza viruses, lateral flow immunoassay, on-site detection, upconversion nanoparticles

Procedia PDF Downloads 149
438 An Integrated Approach to Solid Waste Management of Karachi, Pakistan (Waste-to-Energy Options)

Authors: Engineer Dilnawaz Shah

Abstract:

Solid Waste Management (SWM) is perhaps one of the most important elements constituting the environmental health and sanitation of the urban developing sector. The management system has several components that are integrated as well as interdependent; thus, the efficiency and effectiveness of the entire system are affected when any of its functional components fails or does not perform up to the expected level of operation. The Sindh Solid Waste Management Board (SSWMB) is responsible for the management of solid waste in the entire city. There is a need to adopt an engineered approach in redesigning the existing system. In most towns, street sweeping operations have been mechanized and are carried out by vehicle-operated machinery. Construction of Garbage Transfer Stations (GTS) at a number of locations within the city will cut the cost of transporting waste to disposal sites. Material processing, recovery of recyclables, compaction, volume reduction, and increase in density will enable transportation of waste to disposal sites/landfills via long vehicles (bulk transport), minimizing transport, traffic, and environmental pollution-related issues. Development of disposal sites into proper sanitary landfill sites is mandatory. The transportation mechanism is through garbage vehicles using either hauled or fixed container systems, employing crews for mechanical or manual loading. The number of garbage vehicles is inadequate, and due to comparatively long haulage to disposal sites, there are problems of frequent vehicle maintenance and high fuel costs. Foreign investors have shown interest in enterprising improvement schemes and have proposed operating a solid waste management system in Karachi. The waste-to-energy option is being considered as a practical answer to generate power and reduce the waste load, a two-pronged solution to the increasing environmental problem. The paper presents the results and analysis of a recent study into waste generation and characterization, probing waste-to-energy options for Karachi City.

Keywords: waste to energy option, integrated approach, solid waste management, physical and chemical composition of waste in Karachi

Procedia PDF Downloads 16
437 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study

Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon

Abstract:

The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. The objective was to compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 - March 2021). This was a population-based study using provincial health administrative data in the general adult population of Ontario, Canada. Monthly all-cause mortality, hospitalization, emergency department (ED), and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first year were compared with similar periods in previous years (2016-2019) by fitting monthly time series auto-regressive integrated moving-average (ARIMA) models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected of 71.24 [66.35-76.50]) and then returned to expected in June 2020, except among immigrants and people with mental health conditions, where they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected of 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected of 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected of 5,060.23 [4,712.64-5,433.46]) and then returned to expected in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with temporally increased mortality, disproportionately experienced by immigrants and those with mental health conditions. Funding: The Ottawa Hospital Academic Medical Organization.
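
The "observed vs. projected" comparison described above can be sketched by fitting a seasonal ARIMA model on pre-pandemic monthly rates and forecasting the pandemic year with confidence intervals; the time series below is synthetic, not the Ontario health administrative data.

```python
# Hypothetical sketch: fit a monthly ARIMA model on pre-pandemic rates and
# compare observed pandemic-year rates with the projected values.
# The time series here is synthetic, not Ontario health administrative data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2016-01-01", "2021-03-01", freq="MS")
seasonal = 10 * np.sin(2 * np.pi * idx.month / 12)
rate = 100 + seasonal + rng.normal(0, 3, len(idx))     # per 100,000 at risk

series = pd.Series(rate, index=idx)
train = series.loc[:"2020-02"]                         # pre-pandemic months
observed = series.loc["2020-03":]                      # pandemic year

model = SARIMAX(train, order=(1, 0, 1),
                seasonal_order=(1, 0, 0, 12)).fit(disp=False)
forecast = model.get_forecast(steps=len(observed))
ci = forecast.conf_int()

comparison = pd.DataFrame({"observed": observed.values,
                           "projected": forecast.predicted_mean.values,
                           "lower": ci.iloc[:, 0].values,
                           "upper": ci.iloc[:, 1].values},
                          index=observed.index)
print(comparison.round(1).head())
```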

Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy

Procedia PDF Downloads 77
436 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d'Hydro-Quebec) to investigate a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton's laws to each particle. The water flow is simulated with the Marker and Cell (MAC) method, which solves the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flow into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran's capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or by viscous interaction with the flow, and so on). This work also includes the first attempts to transfer micro-scale discrete results to the macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method is used to solve the system of governing equations; the material behavior equation is based on the results of the simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
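The following minimal sketch illustrates the DEM idea described above, applying Newton's second law to each particle with a simple linear spring-dashpot contact law. It is a two-dimensional didactic toy, not the SiGran implementation, and all parameter values are assumed.

```python
# Minimal DEM sketch (illustrative only): 2-D discs interacting through a
# linear spring-dashpot contact law, integrated with explicit Euler stepping
# of Newton's second law for each particle, settling under gravity.
import numpy as np

n, dt, steps = 50, 1e-4, 500
radius, mass = 0.01, 1e-3
k_n, c_n = 1e4, 0.5                  # normal stiffness and damping (assumed)
g = np.array([0.0, -9.81])

rng = np.random.default_rng(0)
pos = rng.uniform(0.05, 0.45, (n, 2))
vel = np.zeros((n, 2))

for _ in range(steps):
    force = np.tile(mass * g, (n, 1))
    # Pairwise contact forces (simple O(n^2) loop kept for clarity)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = 2 * radius - dist
            if overlap > 0:                       # particles are in contact
                normal = rij / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = (k_n * overlap - c_n * rel_vn) * normal
                force[i] -= fn
                force[j] += fn
    vel += force / mass * dt                      # Newton's second law
    pos += vel * dt
    # Rigid floor at y = 0: reflect crossing particles with 0.5 restitution
    below = pos[:, 1] < radius
    vel[below, 1] = np.abs(vel[below, 1]) * 0.5
    pos[below, 1] = radius
```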

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 287
435 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of the research: Axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units, defining the absolute and relative performance indicators of the engine as a whole. Compressor failure often leads to drastic consequences; safe (stable) operation must therefore be maintained when using an axial compressor. Currently, there is a tendency toward increasing the power, productivity, circumferential velocity and compression ratio of axial compressors in gas turbine engines for aircraft and ground-based applications, while the metal content of their structures tends to fall. This increases the dynamic loads and the risk of damage to highly loaded compressor elements, or to the engine structure in general, due to transient processes. In the operating practice of aeronautical engineering and of ground-based units with gas turbine drives, loss of operational stability is one of the more frequent causes of gas turbine engine failure and can lead to emergency situations. Surge is considered an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research into this phenomenon has been, the development of measures for preventing surge before it occurs remains relevant. This is why the study of transient processes in axial compressors is necessary in order to ensure efficient, stable and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. It considers the dynamic loss of gas-dynamic stability of a compressor stage, results of numerical simulation of the airflow over the airfoil at design and stalled modes, and experimental research carried out to form the criteria that identify the compressor state when detecting the pre-surge mode. The authors formulate basic approaches for developing surge prevention systems, i.e., algorithms that detect the onset of surge and systems that implement the proposed algorithms.
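As a hedged illustration of what a pre-surge detection criterion might look like, the sketch below flags growth of the pressure-fluctuation RMS above a quiet-flow baseline. It is a generic indicator written for this summary, not the authors' algorithm, and the synthetic signal, sampling rate and threshold are assumptions.

```python
# Illustrative pre-surge indicator (assumed, not the authors' algorithm):
# flag samples where the short-window RMS of dynamic pressure fluctuations
# exceeds a multiple of its quiet-flow baseline level.
import numpy as np

def pre_surge_flag(pressure, fs, window_s=0.05, baseline_s=1.0, threshold=3.0):
    """Return sample indices where fluctuation RMS exceeds threshold x baseline."""
    p = pressure - np.mean(pressure)
    win = int(window_s * fs)
    # Sliding RMS of the fluctuating pressure component
    rms = np.sqrt(np.convolve(p**2, np.ones(win) / win, mode="same"))
    baseline = np.median(rms[: int(baseline_s * fs)])   # quiet-flow reference
    return np.where(rms > threshold * baseline)[0]

# Synthetic example: low-level noise followed by growing stall-like
# oscillations as the operating point approaches the surge line.
fs = 10_000
t = np.linspace(0, 2, 2 * fs)
signal = 0.01 * np.random.default_rng(1).standard_normal(t.size)
signal[t > 1.5] += 0.2 * np.sin(2 * np.pi * 120 * t[t > 1.5])

alarms = pre_surge_flag(signal, fs)
print("first alarm at t =", t[alarms[0]] if alarms.size else None, "s")
```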

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

Procedia PDF Downloads 391
434 The Turkish Version of the Carer’s Assessment of Satisfaction Index (CASI-TR): Its Cultural Adaptation, Validation, and Reliability

Authors: Cemile Kütmeç Yilmaz, Güler Duru Asiret, Gulcan Bagcivan

Abstract:

The aim of this study was to evaluate the reliability and validity of the Turkish version of the Carer’s Assessment of Satisfaction Index (CASI-TR). The study was conducted between June 2016 and September 2017 at the Training and Research Hospital of Aksaray University with family members caring for inpatients with chronic diseases. The sample size was calculated as at least 10 individuals per item (30 items x 10 = 300). The study sample included 300 caregiving family members who had provided primary care for at least three months to a patient with at least one chronic disease receiving inpatient treatment in the general internal medicine and palliative care units. Data were collected using a demographic questionnaire and the CASI-TR. Descriptive statistics and psychometric tests were used for the data analysis. Of the caregivers, 76.7% were female, 86.3% were 65 years old or younger, 43.7% were primary school graduates, 87% were married, 86% were not working, 66.3% were housewives, and 60.3% described their income as covering their expenses. Care recipients often had problems with walking, sleep, balance, feeding and urinary incontinence. The Cronbach’s alpha value calculated for the CASI-TR (30 items) was 0.949. The internal consistency coefficients calculated for the subscales were 0.922 for ‘caregiver satisfaction related to the care recipient’, 0.875 for ‘caregiver satisfaction related to themselves’, and 0.723 for ‘dynamics of interpersonal relations’. Factor analysis revealed that three factors with eigenvalues greater than 1 accounted for 57.67% of the total variance. When assessed for significance, the items grouped together in a meaningful manner. The factor loads of the items were between 0.311 and 0.874. These results show that the CASI-TR is a valid and reliable scale. The Turkish adaptation of the CASI was found reliable and valid for assessing the satisfaction of caregivers. The CASI-TR can easily be used in clinics or during home visits by nurses and other health professionals to assess the satisfaction caregivers derive from caregiving.
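For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix. The simulated item scores are hypothetical stand-ins for the 30 CASI-TR items, not the study data.

```python
# Illustrative sketch of the internal-consistency statistic reported above:
# Cronbach's alpha from a respondents-by-items matrix (hypothetical data).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 300 respondents answering 30 correlated items
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))                 # one underlying trait
items = latent + 0.6 * rng.normal(size=(300, 30))  # 30 noisy indicators
print(round(cronbach_alpha(items), 3))             # a high alpha is expected
```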

Keywords: carer’s assessment of satisfaction index, caregiver, validity, reliability

Procedia PDF Downloads 191
433 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation

Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy

Abstract:

A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management; however, commercially available and affordable portable EEG headsets now exist. These headsets can potentially be used to continuously monitor cognitive exertion during mental tasks and alert the wearer to overexertion, with the aim of preventing the occurrence of symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected with a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), logic puzzles at three levels of increasing difficulty, multiplication questions at three levels of increasing difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created for each participant using data from the first session, and the performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithm developed in this study, used with portable EEG devices, has the potential to aid in the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
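A plausible sketch of such a pipeline is shown below: band-power features extracted from a 4-channel EEG segment feed a per-participant regression that predicts a NASA-TLX score. The feature set, model class and data are assumptions made for illustration; the abstract does not specify the authors' exact algorithm.

```python
# Illustrative personalized pipeline (assumed, not the authors' exact model):
# Welch band-power features from a 4-channel EEG segment feeding a ridge
# regression that predicts a NASA-TLX workload score.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import Ridge

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs=256):
    """segment: (channels, samples). Returns one feature per channel per band."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))
    return np.array(feats)

# Hypothetical session-1 data: 10 task segments with self-reported TLX scores
rng = np.random.default_rng(2)
segments = rng.normal(size=(10, 4, 256 * 30))      # 10 tasks, 4 channels, 30 s
tlx = rng.uniform(5, 90, size=10)

X = np.vstack([band_powers(s) for s in segments])
model = Ridge(alpha=1.0).fit(X, tlx)               # per-participant model

# Session-2 evaluation would compute the correlation between model.predict(X2)
# and the reported session-2 TLX scores, e.g. np.corrcoef(pred, tlx2)[0, 1].
```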

Keywords: cognitive activity, EEG, machine learning, personalized recovery

Procedia PDF Downloads 208
432 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties; these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to the material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS by varying the material properties within a realistic tolerance range and comparing the calculated results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as with the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented and the validation discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
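As a hedged, first-order illustration of an asymptotic RCS estimate, the sketch below evaluates the physical-optics broadside RCS of a flat plate and scales it by a normal-incidence reflection coefficient when the plate is approximated as a dielectric half-space. This simplification, and the material values used, are assumptions and are not the paper's full-wave or measured results.

```python
# Illustrative first-order model (assumed, not the paper's solvers): PO
# broadside RCS of a flat plate, sigma = 4*pi*A^2/lambda^2 for a perfect
# conductor, scaled by the normal-incidence power reflection coefficient
# when the plate is approximated as a lossy dielectric half-space.
import numpy as np

C0 = 299_792_458.0   # speed of light, m/s

def plate_rcs_dbsm(freq_hz, a, b, eps_r=None, loss_tan=0.0):
    """Broadside RCS (dBsm) of an a x b plate; eps_r=None means PEC."""
    lam = C0 / freq_hz
    sigma = 4 * np.pi * (a * b) ** 2 / lam**2          # PO, normal incidence
    if eps_r is not None:
        eps = eps_r * (1 - 1j * loss_tan)              # complex permittivity
        gamma = (1 - np.sqrt(eps)) / (1 + np.sqrt(eps))
        sigma *= np.abs(gamma) ** 2                    # half-space approximation
    return 10 * np.log10(sigma)

# Sensitivity to a +/-10% permittivity tolerance at 10 GHz for a 0.2 m plate
for er in (2.7, 3.0, 3.3):
    print(er, round(plate_rcs_dbsm(10e9, 0.2, 0.2, eps_r=er, loss_tan=0.02), 2))
```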

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 223
431 Chemically Enhanced Primary Treatment: Full Scale Trial Results Conducted at a South African Wastewater Works

Authors: Priyanka Govender, S. Mtshali, Theresa Moonsamy, Zanele Mkwanazi, L. Mthembu

Abstract:

Chemically enhanced primary treatment (CEPT) can be used at wastewater works to improve the quality of the final effluent discharge, provided that the plant has spare anaerobic digestion capacity. CEPT can transfer part of the organic load to the digesters, effectively relieving the hydraulic loading on the plant and allowing it to continue operating long after its hydraulic capacity has been exceeded. A plant can therefore operate well beyond its original design capacity with only fairly simple and inexpensive modifications to the primary settling tanks (PSTs) and additional chemical costs, thereby delaying or even avoiding the need for expensive capital upgrades. CEPT can also be effective at plants where high organic loadings prevent the wastewater discharge from meeting discharge standards, especially for COD, phosphates and suspended solids. By increasing the removal of these pollutants in the primary settling tanks, CEPT can enable a plant to conform to specifications without the need for costly upgrades. Laboratory trials were carried out recently at the Umbilo WWTW in Durban, followed by a baseline assessment of current plant performance and a subsequent full-scale trial on the conventional plant, i.e. the West Plant. The operating conditions of the plant are described, and the improvements obtained in COD, phosphate and suspended solids are discussed. The PST and overall plant suspended solids removal efficiencies increased by approximately 6% during the trial. Details regarding the effect of CEPT on sludge production and the digesters are also provided. The cost implications of CEPT are discussed in terms of capital costs as well as operation and maintenance costs, and the impact of ferric chloride on the infrastructure was also studied and found to be minimal. It was concluded that CEPT improves the final quality of the discharge effluent, thereby improving compliance with the discharge license. It could also allow plant upgrades to be delayed, with the plant operating above its design capacity. These aspects will be elaborated upon further in the presentation.
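As a back-of-the-envelope illustration of the removal-efficiency metric quoted above, the sketch below applies the standard (influent - effluent) / influent definition to hypothetical suspended-solids concentrations. The concentrations are invented for illustration only and are not trial data.

```python
# Illustrative calculation (assumed figures, not trial data): suspended-solids
# removal efficiency across the primary settling tanks before and after
# CEPT dosing, using the standard (influent - effluent) / influent definition.
def removal_efficiency(influent_mg_l, effluent_mg_l):
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Hypothetical concentrations around the primary settling tanks
baseline = removal_efficiency(influent_mg_l=300.0, effluent_mg_l=180.0)
with_cept = removal_efficiency(influent_mg_l=300.0, effluent_mg_l=162.0)
print(f"baseline {baseline:.0f}%, with CEPT {with_cept:.0f}%, "
      f"gain {with_cept - baseline:.0f} percentage points")
```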

Keywords: chemically enhanced, ferric, wastewater, primary

Procedia PDF Downloads 283
430 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling

Authors: M. Bonello, D. Micallef, S. P. Borg

Abstract:

Driven by increased comfort standards but, at the same time, a high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy-consumption cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. Over the past two decades, several academic studies have examined the thermal comfort produced by evaporative cooling systems, including studies on temperature profiles, air speed profiles, and the effects of clothing and personal activity. To the best knowledge of the authors, no studies have yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study would determine the effect of different humidity levels on a person's thermal comfort and aid in the improvement of future system designs. Under this premise, the research objective is to characterise the RH profiles that result in a chamber micro-climate cooled by an evaporative cooling system when the inlet air speed, temperature and humidity content are varied. The chamber is modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity is modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model. The model is validated against measurements taken in an identical test chamber in which tests are conducted under the different inlet conditions mentioned above, followed by verification of the model's mesh and time step. The verified and validated model is then used to simulate other inlet conditions that would be impractical to reproduce in the actual chamber. More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic spatial distribution of relative humidity within the room is important to consider when investigating comfort at the occupant level; and a human being's thermal comfort (based on Predicted Mean Vote and Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the local relative humidity. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target ways in which evaporative cooling systems may be improved to enhance the thermal comfort of occupants, specifically with regard to the humidity content around a sedentary person.
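A small post-processing sketch along these lines is given below: it converts a water-vapour mass fraction, as exported from a species transport solution, into local relative humidity via the Magnus saturation-pressure formula, and evaluates the ISO 7730 PPD-from-PMV relation used in comfort assessment. The workflow, ambient pressure and example values are assumptions, not the authors' scripts.

```python
# Post-processing sketch (assumed workflow, not the authors' scripts):
# relative humidity from the water-vapour mass fraction of a species
# transport solution, plus the ISO 7730 PPD-from-PMV relation.
import numpy as np

M_W, M_AIR = 18.015, 28.966      # molar masses, g/mol
P_TOTAL = 101_325.0              # Pa, assumed ambient pressure

def relative_humidity(mass_fraction_h2o, temp_c):
    """RH (%) from vapour mass fraction and local air temperature (deg C)."""
    w = np.asarray(mass_fraction_h2o, dtype=float)
    x_v = (w / M_W) / (w / M_W + (1 - w) / M_AIR)     # vapour mole fraction
    p_v = x_v * P_TOTAL                               # vapour partial pressure
    p_sat = 610.94 * np.exp(17.625 * temp_c / (temp_c + 243.04))  # Magnus, Pa
    return 100.0 * p_v / p_sat

def ppd_from_pmv(pmv):
    """Predicted Percentage Dissatisfied from PMV (ISO 7730 relation)."""
    pmv = np.asarray(pmv, dtype=float)
    return 100.0 - 95.0 * np.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# Example: hypothetical cell values downstream of the evaporative cooler
print(relative_humidity([0.010, 0.014], temp_c=np.array([26.0, 24.0])))
print(ppd_from_pmv([-0.5, 0.0, 0.7]))
```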

Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort

Procedia PDF Downloads 143
429 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building

Authors: Zehra Aybike Kılıç, Alpin Köknel Yener

Abstract:

Daylight is a powerful driver for improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment, but also reduces lighting energy consumption and heating/cooling loads through the optimization of aperture size, glazing type and solar control strategy, which are the major design parameters of a daylighting system. Particularly in high-rise buildings, where large openings that allow maximum daylight and views out are preferred, evaluating daylight performance in terms of the major parameters of the building envelope design becomes crucial for ensuring occupants' comfort and improving energy efficiency. Moreover, examining the daylighting design of high-rise residential buildings is increasingly necessary, considering the share of residential buildings in the construction sector, the duration of occupation, and changing space requirements. This study aims to identify a proper daylighting design solution for a high-rise residential building, considering window area, glazing type and solar control strategy, in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out with DIVA for Rhino version 4.1.0.12. The results are evaluated using Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption in each scenario is analysed to determine the optimum solution that reduces lighting energy consumption while optimizing daylight performance. The results revealed that reducing lighting energy consumption while providing visual comfort is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control devices.
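As an illustration of the Daylight Autonomy metric used above, the sketch below computes DA as the share of occupied hours in which daylight alone meets a target illuminance at a sensor point. The annual illuminance series, occupancy schedule and 300 lux target are assumptions for illustration, not DIVA output.

```python
# Illustrative post-processing sketch (assumed, not the DIVA workflow):
# Daylight Autonomy as the share of occupied hours in which daylight alone
# meets the target illuminance at a sensor point.
import numpy as np

def daylight_autonomy(illuminance_lux, occupied_mask, target_lux=300.0):
    """illuminance_lux: hourly values at one sensor point for a full year."""
    lux = np.asarray(illuminance_lux, dtype=float)
    occ = np.asarray(occupied_mask, dtype=bool)
    return 100.0 * (lux[occ] >= target_lux).mean()

# Hypothetical annual hourly series (8760 values) and 08:00-18:00 occupancy
rng = np.random.default_rng(3)
hours = np.arange(8760)
lux = np.clip(800 * np.sin(np.pi * (hours % 24 - 6) / 12), 0, None) \
      * rng.uniform(0.3, 1.0, 8760)          # crude daylight profile with clouds
occupied = (hours % 24 >= 8) & (hours % 24 < 18)

da = daylight_autonomy(lux, occupied)
print(f"DA = {da:.1f}% (electric lighting needed ~{100 - da:.1f}% of occupied hours)")
```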

Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort

Procedia PDF Downloads 161