Search results for: solution focused group therapy
1084 The Impact of Professional Development in the Area of Technology Enhanced Learning on Higher Education Teaching Practices Across Atlantic Technological University – Research Methodology and Preliminary Findings
Authors: Annette Cosgrove
Abstract:
The objective of this research study is to examine the impact of professional development in Technology Enhanced Learning (TEL) and the digitisation of learning in teaching communities across multiple higher education sites in the ATU (Atlantic Technological University) (2020-2025), including the proposal of an evidence-based digital teaching model for use in a future pandemic. The research strategy undertaken for this PhD study is a multi-site study using mixed methods; qualitative and quantitative methods are being used to collect data. A pilot study was carried out initially, feedback was collected, and the research instrument was edited to reflect this feedback before being administered. The purpose of the staff questionnaire is to evaluate the impact of professional development in the area of TEL and to capture practitioners' views on the perceived impact on their teaching practice in the higher education sector across ATU (West of Ireland, five higher education locations). The phenomenon being explored is 'the impact of professional development in the area of technology enhanced learning on teaching practice in a higher education institution.' The research methodology chosen for this study is action-based research. The researcher has chosen this approach as it is a prime strategy for developing educational theory and enhancing educational practice. This study includes quantitative and qualitative methods to elicit data that will quantify the impact that continuous professional development in the area of digital teaching practice and technologies has on the practitioner's teaching practice in higher education. The research instruments / data collection tools for this study include a lecturer survey with a targeted TEL practice group (pre- and post-COVID experience) and semi-structured interviews with lecturers.
This research is currently being conducted across the ATU multi-site campus, targeting higher education lecturers who have completed formal CPD in the area of digital teaching. ATU, a west of Ireland university, is the focus of the study. The research questionnaire has been deployed, with 75 respondents to date across the ATU; the primary questionnaire and semi-formal interviews are currently ongoing, the purpose being to evaluate the impact of formal professional development in the area of TEL and its perceived impact on practitioners' teaching practice in the area of digital teaching and learning. This paper will present initial findings, reflections and data from this ongoing research study.
Keywords: TEL, DTL, digital teaching, digital assessment
Procedia PDF Downloads 70
1083 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants
Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji
Abstract:
Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To justify emulsification, drag and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase to the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffrey's model was integrated into the formulation to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed, and the dispersion relation and the corresponding characteristic equation for the growth rate were obtained.
Three boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. Also, the dynamics of the evolved interfacial thin liquid film were numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension were tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, where there is film decay. The interfacial films at both boundary conditions respond to exposure time in a similar manner, with an increasing growth rate that resulted in the formation of more droplets with time. Contrarily, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.
Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization
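The abstract does not reproduce the characteristic equation itself, but the qualitative role of the interfacial boundary condition can be illustrated with a deliberately generic long-wave sketch, in which the boundary condition enters through an interfacial mobility term and surface tension damps short waves as k⁴. The functional form, symbols, and parameter values here are assumptions for illustration, not the authors' model.

```python
import numpy as np

def growth_rate(k, slip, h0=1.0, gamma=1.0, mu=1.0):
    """Illustrative long-wave growth rate sigma(k) for a thin film.
    A Navier-type slip coefficient `slip` augments the film mobility
    (h0^3/3 + slip*h0^2); surface tension gamma damps disturbances as k^4.
    Negative sigma means the film perturbation decays."""
    mobility = h0**3 / 3.0 + slip * h0**2
    return -(gamma / mu) * mobility * k**4

k = np.linspace(0.01, 1.0, 100)  # wavenumbers of the prescribed disturbance
b = 0.1                          # assumed magnitude of the slip coefficient
sigma_slip = growth_rate(k, +b)  # slip condition (+1 direction)
sigma_none = growth_rate(k, 0.0) # no-slip condition (0)
sigma_lock = growth_rate(k, -b)  # locking condition (-1 direction)
```

In this toy form, larger interfacial mobility yields faster film decay, so the slip condition decays fastest and locking slowest, mirroring the decay behaviour the abstract reports for the slip and locking cases.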
Procedia PDF Downloads 129
1082 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites
Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana
Abstract:
With the growth of environmental awareness, extensive research is underway to develop the next generation of materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) is of great interest for ecological and medical applications. Also, cellulose is one of the most abundant biodegradable, renewable polymers found in nature. It has several advantages, such as low cost, high mechanical strength, and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for grafting of cellulose to improve compatibility prior to composite preparation. Here it is quite difficult to form a bond between less hydrophilic molecules like PLLA and α-cellulose. Dimers and oligomers can easily be grafted onto the surface of the cellulose by ring-opening or polycondensation methods due to their low molecular weight. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. Here ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acids at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by a solution mixing and film casting method. Grafting was confirmed through FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose, absent in α-cellulose, confirms the grafting of ODLA onto α-cellulose.
It is also observed from SEM photographs that there are some white areas (spots) on ODLA-grafted α-cellulose as compared to α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. Analysis of the composites was carried out by FTIR, SEM, WAXD and thermal gravimetric analysis. Most of the FTIR characteristic absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and grafted cellulose have better compatibility in composites via intermolecular hydrogen bonding; this supports previously published results. The grafted α-cellulose distribution in the composites is uniform, as observed by SEM analysis. The WAXD study shows that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentages of ODLA-grafted α-cellulose; as a consequence, the resultant composites have a resistance toward thermal degradation. The effects of the length of the grafted chain and the biodegradability of the composites will be studied in further research.
Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)
Procedia PDF Downloads 116
1081 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to this problem. In this study, we propose to develop an ML-based model to predict the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed using these three types of information: F1 and F2 were treated as two independent datasets, and the third was used as the class label for both. Five machine learning algorithms were considered to train the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, the latter being the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through a gradient boosting algorithm, the outputs were combined into a single dataset, called the F1F2 ensemble dataset, and a model was trained on this dataset with each of the five algorithms. As the experiments show, the ensemble model trained on the gradient boosting outputs outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
Keywords: machine learning, MTB, WGS, drug resistant TB
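The two-stage ensemble described above can be sketched as follows, using synthetic stand-in features in place of the WGS-derived F1 and F2 datasets; the feature dimensions, the synthetic label rule, and the Random Forest final stage are illustrative assumptions, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
F1 = rng.normal(size=(n, 20))  # stand-in for candidate-gene variants (F1)
F2 = rng.normal(size=(n, 10))  # stand-in for known resistance variants (F2)
y = (F1[:, 0] + F2[:, 0] > 0).astype(int)  # synthetic resistance phenotype label

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(
    F1, F2, y, random_state=0)

# Stage 1: one gradient boosting model per feature set
gb1 = GradientBoostingClassifier(random_state=0).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(F2_tr, y_tr)

# Stage 2: stack the two predicted probabilities into an "F1F2 ensemble"
# dataset, then train the final classifier (here, a Random Forest) on it
stack_tr = np.column_stack(
    [gb1.predict_proba(F1_tr)[:, 1], gb2.predict_proba(F2_tr)[:, 1]])
stack_te = np.column_stack(
    [gb1.predict_proba(F1_te)[:, 1], gb2.predict_proba(F2_te)[:, 1]])
final = RandomForestClassifier(random_state=0).fit(stack_tr, y_tr)
acc = final.score(stack_te, y_te)
```

The design point of the stacking step is that each first-stage model summarizes one feature set into a resistance probability, so the final classifier only has to combine two well-calibrated signals.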
Procedia PDF Downloads 52
1080 Adjustment with Changed Lifestyle at Old Age Homes: A Perspective of Elderly in India
Authors: Priyanka V. Janbandhu, Santosh B. Phad, Dhananjay W. Bansod
Abstract:
The current changing scenario of the family is compelling the aged group not only to live alone in a nuclear family but also to join old age institutions. The consequences are a feeling of being neglected or left alone by the children, together with a sense of helplessness in the absence of expected care and support. The accretion of all these feelings and unpleasant events ignites a question in their minds: who is there for me? Efforts have been made to highlight the issues of the elderly after joining an old age home and their perception of their current life as institutional inmates. This attempt covers the condition, adjustment, changed lifestyle and perspective of the elderly in association with several issues that have an essential effect on their well-being. The present research collected information about the institutionalized elderly with the help of a semi-structured questionnaire. This study interviewed 500 respondents from 22 old age homes in Pune city of Maharashtra State, India. The data collection methodology consisted of multi-stage random sampling, in which stratified random sampling was adopted for the selection of old age homes and sample size determination, and sample selection was carried out with probability proportional to size and simple random sampling techniques. The study finds that around five percent of the elderly shifted to an old age home along with their spouse, whereas ten percent of the elderly are staying away from their spouse. More than 71 percent of the elderly have children and are involuntary inmates of the old age institution; even so, less than one-third of the elderly consulted the institution before joining it. More than sixty percent of the elderly have children but joined the institution due to the unpleasant response of their children. Around half of the elderly responded that there were issues while adjusting to this environment, many of which still persist.
At least one elderly person out of ten suffers from a feeling of loneliness and of being left out by children and other family members. In contrast, around 97 percent of the elderly are very happy or satisfied with the institutional facilities. This illustrates that the issues are associated with their children and other family members, even though they left their homes a year or more ago. When enquired about this feeling of loneliness, a few of them had suffered from it before leaving their homes; it was due to lack of interaction with children, who were too busy to have time for their aged parents. Additionally, conflicts or fights within the family due to the presence of old persons contributed to establishing a feeling of insignificance among the elderly parents. According to these elderly, who make up more than 70 percent of the sample, the children are ready to spend money on them indirectly through these institutions, but are not prepared to give them some of their time, or even a small fraction of all this expenditure, directly.
Keywords: elderly, old age homes, life style changes and adjustment, India
Procedia PDF Downloads 134
1079 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and the results of the design are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system, and thus the occupants cannot interact with it.
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different entries of consumption are analyzed, and for the more interesting cases the calibration indexes are also compared. Moreover, the same simulations are run for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is evidenced. This parametric study underlines the effect of modelling assumptions in the description of thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
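The scenario comparison above reduces to a percentage difference of each scenario's projected energy need against the calibrated reference model; a minimal sketch follows, in which the scenario names and energy figures are hypothetical, chosen only to illustrate the metric.

```python
def pct_diff(scenario_kwh, reference_kwh):
    """Percentage difference of a scenario's projected annual energy need
    relative to the calibrated reference model."""
    return 100.0 * (scenario_kwh - reference_kwh) / reference_kwh

reference_kwh = 120_000.0  # hypothetical calibrated electric energy need
scenarios = {
    "dense occupancy schedule": 131_400.0,   # hypothetical scenario results
    "sparse occupancy schedule": 104_900.0,
    "lighting always on": 149_700.0,
}
diffs = {name: pct_diff(v, reference_kwh) for name, v in scenarios.items()}
for name, d in diffs.items():
    print(f"{name}: {d:+.1f}%")
```

The sign of each difference shows whether a schedule assumption inflates or deflates the prediction, which is exactly the order-of-magnitude question the sensitivity analysis asks.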
Procedia PDF Downloads 141
1078 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and laminar flow with high-viscosity water-glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS) is a time-averaged, steady, second-order flow phenomenon that can produce rolling motion in a microchannel when a low-frequency acoustic transducer is oscillated; inducing an acoustic wave in the flow field is a promising strategy to enhance diffusive mass transfer and mixing performance in laminar flow. In this study, the 3D trapezoidal structure was manufactured with advanced CNC machine cutting tools to produce molds of the trapezoidal structure, with a 3D sharp-edge tip angle of 30° and a 0.3 mm spine sharp-edge tip depth, from PMMA glass (polymethylmethacrylate), and the microchannel was fabricated using PDMS (polydimethylsiloxane). The structure extends longitudinally along the top surface of the Y-junction mixing region, allowing visualization of the 3D rolling steady acoustic streaming and evaluation of mixing performance using high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique with different spine depth lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices was created around the trapezoidal spine structure, with vorticity maps up to 8 times higher than the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture fluids.
The mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side and de-ionized water-glycerol with different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated via the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale pixel intensity with MATLAB software. The degree of mixing (M) was found to improve significantly with acoustic streaming, from 67.42% without acoustic streaming to 96.8%, in the case of a 0.0986 μl/min flow rate, 12 kHz frequency and 40 V oscillation amplitude at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion at high volume flow rates around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that laminar flow alone, under low-viscosity conditions, is unable to mix.
Keywords: nano fabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
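The study computes the degree of mixing from grayscale pixel intensities in MATLAB; a common intensity-variance formulation of such a mixing index can be sketched in Python as follows. The normalization and the two synthetic intensity fields are assumptions for illustration, and the exact definition used in the study may differ.

```python
import numpy as np

def degree_of_mixing(gray):
    """Mixing index from 8-bit grayscale pixel intensities across a
    channel cross-section: M = 1 - sigma / sigma_max, where sigma is the
    standard deviation of the normalized intensity and sigma_max that of
    a fully segregated (unmixed) field with the same mean."""
    intensity = gray.astype(float) / 255.0
    mean = intensity.mean()
    sigma = intensity.std()
    sigma_max = np.sqrt(mean * (1.0 - mean))  # two-segregated-streams limit
    return 1.0 - sigma / sigma_max if sigma_max > 0 else 1.0

# Synthetic cross-sections: half dye / half clear vs. uniformly blended
unmixed = np.concatenate([np.zeros(500), np.full(500, 255)])
mixed = np.full(1000, 128)
```

With this definition, M is 0 for two fully segregated streams and approaches 1 as the intensity field becomes uniform, matching the 0-100% scale used in the abstract.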
Procedia PDF Downloads 32
1077 Effect of Phenolic Acids on Human Saliva: Evaluation by Diffusion and Precipitation Assays on Cellulose Membranes
Authors: E. Obreque-Slier, F. Orellana-Rodríguez, R. López-Solís
Abstract:
Phenolic compounds are secondary metabolites present in some foods, such as wine. Polyphenols comprise two main groups: flavonoids (anthocyanins, flavanols, and flavonols) and non-flavonoids (stilbenes and phenolic acids). Phenolic acids are low molecular weight non-flavonoid compounds that are usually grouped into benzoic acids (gallic, vanillic and protocatechuic acids) and cinnamic acids (ferulic, p-coumaric and caffeic acids). Likewise, tannic acid is an important polyphenol constituted mainly of gallic acid. Phenolic compounds are responsible for important properties in foods and drinks, such as color, aroma, bitterness, and astringency. Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. Astringency perception has been associated with interactions between flavanols present in some foods and salivary proteins. Despite the quantitative relevance of phenolic acids in food and beverages, there is no information about their effect on salivary proteins and consequently on the sensation of astringency. The objective of this study was to assess the interaction of several phenolic acids (gallic, vanillic, protocatechuic, ferulic, p-coumaric and caffeic acids) with saliva; tannic acid was used as a control. Solutions of each phenolic acid (5 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15-μL aliquots of the mixtures were dotted on a cellulose membrane and allowed to diffuse. The dry membrane was fixed in 50 g/L trichloroacetic acid, rinsed in 800 mL/L ethanol and stained for protein with Coomassie blue for 20 min, destained with several rinses of 73 g/L acetic acid and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of protein-tannin interaction (diffusion test).
The rest of the whole saliva-phenol solution mixtures from the diffusion assay were centrifuged, and 15-μL aliquots of each supernatant were dotted on a cellulose membrane, allowed to diffuse and processed for protein staining, as indicated above. In this latter assay, reduced protein staining was taken as indicative of protein precipitation (precipitation test). The diffusion of the salivary protein was restricted by the presence of each phenolic acid (an anti-diffusive effect), while tannic acid did not alter the diffusion of the salivary protein. By contrast, the phenolic acids did not provoke precipitation of the salivary protein, while tannic acid did produce precipitation of salivary proteins. In addition, binary mixtures (mixtures of two components) of various phenolic acids with gallic acid provoked a restriction of salivary protein diffusion, an effect similar to that of the corresponding individual phenolic acids. Conversely, binary mixtures of phenolic acids with tannic acid, as well as tannic acid alone, did not affect the diffusion of the saliva but provoked an evident precipitation. In summary, phenolic acids show a relevant interaction with the salivary proteins, suggesting that these wine compounds can also contribute to the sensation of astringency.
Keywords: astringency, polyphenols, tannins, tannin-protein interaction
Procedia PDF Downloads 246
1076 Development of Tutorial Courseware on Selected Topics in Mathematics, Science and the English Language
Authors: Alice D. Dioquino, Olivia N. Buzon, Emilio F. Aguinaldo, Ruel Avila, Erwin R. Callo, Cristy Ocampo, Malvin R. Tabajen, Marla C. Papango, Marilou M. Ubina, Josephine Tondo, Cromwell L. Valeriano
Abstract:
The main purpose of this study was to develop, evaluate and validate courseware on selected topics in Mathematics, Science, and the English language. Specifically, it aimed to: 1. identify the appropriate Instructional Systems Design (ISD) model for the development of the courseware material; 2. assess the courseware material according to its: a. content characteristics; b. instructional characteristics; and c. technical characteristics; and 3. find out whether there is a significant difference in the performance of students before and after using the tutorial CAI. This research is developmental and uses a one-group pretest-posttest design. The study had two phases. Phase I included the needs analysis and the writing of lessons and storyboards by the respective experts in each field. Phase II included the digitization, or actual development, of the courseware by the faculty of the ICT department. This phase adapted an instructional systems design (ISD) model, the ADDIE model; ADDIE stands for Analysis, Design, Development, Implementation and Evaluation. Formative evaluation was conducted simultaneously with the different phases to detect and remedy any bugs in the courseware in the areas of content, instructional and technical characteristics. The expected outputs were the digitized lessons in Algebra, Biology, Chemistry, Physics and Communication Arts in English. Students and some IT experts validated the CAI material using the evaluation form by Wong & Wong. They rated the CAI materials as Highly Acceptable, with an overall mean rating of 4.527 and a standard deviation of 0, which means that they were unanimous in the ratings they gave the CAI materials. A mean gain was recorded, and the t-test for dependent samples showed that there were significant differences in the mean achievement of the students before and after the treatment (using CAI). The ISD model identified for the development of the tutorial courseware was the ADDIE model.
The quantitative analyses of the data, based on the ratings given by the respondents, show that the tutorial courseware possesses the characteristics and qualities of a very good computer-based courseware. The ratings given by the different evaluators with regard to the content, instructional, and technical aspects of the tutorial courseware approach excellence. Students performed better in Mathematics, Biology, Chemistry, Physics and English Communication Arts after they were exposed to the tutorial courseware.
Keywords: CAI, tutorial courseware, Instructional Systems Design (ISD) Model, education
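The t-test for dependent samples used above can be sketched as follows; the pretest and posttest scores here are hypothetical, chosen only to illustrate the computation, not the study's data.

```python
import math

def paired_t(pre, post):
    """Paired (dependent-samples) t statistic: mean of the pre/post
    differences divided by its standard error."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

pre = [12, 15, 11, 14, 13, 10, 16, 12]   # hypothetical pretest scores
post = [18, 20, 16, 19, 17, 15, 21, 17]  # hypothetical posttest scores
t = paired_t(pre, post)
```

A value of t exceeding the critical value for n-1 degrees of freedom (about 2.365 at the 0.05 level, two-tailed, for n=8) would indicate a significant pre/post difference, which is the form of inference the study reports.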
Procedia PDF Downloads 346
1075 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. How to optimize the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology; many of them apply well-known numerical methods, and some have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. Previous proposals introduced into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function.
We analyzed the 100 well curves they use in their experiment, and similar results were observed; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well as reported by them. We identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
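A classical, simple genetic algorithm of the kind described above can be sketched as follows. The well-performance curves, their coefficients, the total gas budget, and the GA parameters are all illustrative assumptions, not the authors' well data; the fitness function is the total oil rate for a candidate gas allocation.

```python
import random
random.seed(1)

# Hypothetical well-performance curves: oil rate for injected gas g,
# q(g) = a*g/(b+g) (diminishing returns) minus a back-pressure penalty c*g
WELLS = [(100, 5, 2.0), (80, 3, 1.5), (120, 8, 2.5), (90, 4, 1.8)]
TOTAL_GAS = 10.0  # assumed field-wide gas injection budget

def oil(alloc):
    """Fitness: total oil rate for a per-well gas allocation."""
    return sum(a * g / (b + g) - c * g for (a, b, c), g in zip(WELLS, alloc))

def normalize(alloc):
    """Repair operator: rescale so the allocation spends the full budget."""
    s = sum(alloc)
    if s <= 0:
        return [TOTAL_GAS / len(WELLS)] * len(WELLS)
    return [g * TOTAL_GAS / s for g in alloc]

def ga(pop_size=60, gens=200):
    pop = [normalize([random.random() for _ in WELLS]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=oil, reverse=True)
        elite = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            w = random.random()              # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            if random.random() < 0.2:        # mutate one well's share
                i = random.randrange(len(child))
                child[i] = max(0.0, child[i] + random.gauss(0, 0.5))
            children.append(normalize(child))
        pop = elite + children
    return max(pop, key=oil)

best = ga()
```

The repair-by-rescaling step keeps every candidate feasible with respect to the total-gas constraint, which is one simple way to handle the field-level budget the abstract discusses.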
Procedia PDF Downloads 162
1074 Females’ Usage Patterns of Information and Communication Technologies (ICTs) in the Vhembe District, South Africa
Authors: Fulufhelo Oscar Maphiri-Makananise
Abstract:
The main purpose of this paper is to explore and provide substantiated evidence on the usage patterns of Information and Communication Technologies (ICTs) by females in the Vhembe District in Limpopo Province, South Africa. The study presents a broad picture and understanding of the usage of ICTs from the female perspective. The significance of this study stems from the need to discover the role, relevance and usage patterns of ICTs such as smartphones, computers, laptops, iPods, the internet and social networking sites among females, following the trends of new media technologies in society. The main objective of the study was to investigate the usability and accessibility of ICTs as a means to empower females in the Vhembe District of South Africa. The study used a quantitative research method, together with elements of qualitative research, to determine the major ideas, perceptions and usage patterns of ICTs by females in the district. Data collection involved a structured, self-administered questionnaire with both closed-ended and open-ended questions. Two groups of respondents participated in this study: Media Studies female students (n=50) at the University of Venda provided their ideas and perceptions about the usefulness and usage patterns of ICTs such as smartphones, the internet and computers at the university level, while the second group, (n=50) Makhado comprehensive school learners, provided their perceptions and ideas about the use of ICTs at the high school level. The study provides balanced, accurate and rational results on the pertinent issues that concern the use of ICTs by females in the Vhembe District. The researcher also believes that the findings of the study are useful as a guideline and model for ICT interventions that work to empower women in South Africa.
The study showed that the main purpose of using ICTs among females was to search for information for writing assignments, conduct research, date, exchange ideas and network with friends and relatives who are also members of social networking sites, and to maintain existing friendships in real life. The study further revealed that most females used ICTs for social purposes and accessing the internet rather than for entertainment. The findings also indicated a high proportion of females who used ICTs for e-learning (62%) and social purposes (85%). Moreover, the study centred on providing insightful information on females’ usage patterns and their perceptions of ICTs in the Vhembe District of Limpopo Province.
Keywords: female users, information and communication technologies, internet, usage patterns
Procedia PDF Downloads 215
1073 Clinically-Based Improvement Project Focused on Reducing Risks Associated with Diabetes Insipidus, Syndrome of Inappropriate ADH, and Cerebral Salt Wasting in Paediatric Post-Neurosurgical and Traumatic Brain Injury Patients
Authors: Shreya Saxena, Felix Miller-Molloy, Phillipa Bowen, Greg Fellows, Elizabeth Bowen
Abstract:
Background: Complex fluid balance abnormalities are well-established post-neurosurgery and traumatic brain injury (TBI). The triple-phase response requires fluid management strategies reactive to urine output and sodium homeostasis as patients shift between Diabetes Insipidus (DI) and Syndrome of Inappropriate ADH (SIADH). A relatively high prevalence of these complications was observed at a tertiary paediatric centre within a cohort of paediatric post-neurosurgical and TBI patients. An audit of clinical practice against set institutional guidelines was undertaken and analyzed to understand why this was occurring. Based on those results, new guidelines were developed, with structured educational packages for the specialist teams involved. Practice was then re-audited and the findings were compared. Methods: Two independent audits were conducted across two time periods, pre and post guideline change. Primary data were collected retrospectively, including both qualitative and quantitative data sets, from the CQUIN neurosurgical database and electronic medical records. All paediatric patients post posterior fossa (PFT) or supratentorial surgery, or with a TBI, were included. A literature review of evidence-based practice, the initial audit data, and stakeholder feedback were used to develop new clinical guidelines and nursing standard operating procedures. Compliance with these newly developed guidelines was re-assessed, and a thematic, trend-based analysis of the two sets of results was conducted. Results: Audit 1: January 2017 to June 2018, n=80; Audit 2: January 2020 to June 2021, n=30 (reduced operative capacity due to the COVID-19 pandemic). Overall, improvements in the monitoring of both fluid balance and electrolyte trends were demonstrated: 51% vs. 77% and 78% vs. 94%, respectively. The number of clear fluid management plans documented postoperatively also increased (odds ratio of 4), leading to earlier recognition and management of evolving fluid-balance abnormalities. 
The local paediatric endocrine team was involved in the care of all complex cases and was notified sooner for those considered to be developing DI or SIADH (14% to 35%). However, significant Na fluctuations (>12 mmol in 24 hours) remained similar (five vs. six patients), found to be due to complex pituitary-hypothalamic pathology, and the recommended adaptive fluid management strategy was still not always used. Qualitative data regarding usability and understanding of fluid-balance abnormalities and the revised guidelines were obtained from health professionals via surveys and discussions within the specialist teams providing care. The feedback highlighted that the new guidelines provided a more consistent approach to the post-operative care of these patients and a better platform for communication amongst the different specialist teams involved. A potential limitation of our study is the small sample size on which to conduct formal analyses; however, this reflects the population we were investigating, which we cannot control. Conclusion: The revised clinical guidelines, based on audited data, an evidence-based literature review and stakeholder consultations, have demonstrated an improvement in understanding of the possible neuro-endocrine complications, as well as increased compliance with post-operative monitoring of fluid balance and electrolytes in this cohort of patients. Emphasis has been placed on prevention rather than treatment of DI and SIADH. Consequently, this has positively impacted patient safety at the centre and highlighted the importance of educational awareness and multi-disciplinary team working.
Keywords: post-operative, fluid-balance management, neuro-endocrine complications, paediatric
Procedia PDF Downloads 92
1072 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management
Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing
Abstract:
Faced with unprecedented challenges including aging assets, a lack of maintenance budget, overtaxed and inefficient usage, and an outcry for better service quality from society, today’s infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization based on technologies such as IoT, big data and AI. To date, there are already numerous global digital twin initiatives such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of such applications and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating and testing. Through in-depth literature review, interviews and focus group meetings, the key domain areas are populated, defined and iteratively tuned. Finally, a case study of several digital twin projects is conducted for self-verification. 
The findings of the research reveal that: (i) the developed maturity model outlines five maturity levels leading to an optimised digital twin application, covering the aspects of strategic intent, data, technology, governance, and stakeholders’ engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives while level 4 is on the way; and (iii) more practice is still needed to refine the draft so that it is mutually exclusive and collectively exhaustive in its key domain areas.
Keywords: digital twin, infrastructure asset management, maturity model, smart city
Procedia PDF Downloads 157
1071 Knowledge Management in Public Sector Employees: A Case Study of Training Participants at National Institute of Management, Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
The purpose of this study is to investigate the current level of knowledge mapping skills of public sector employees in Pakistan. The National Institute of Management is one of the premier public sector training organizations for mid-career public sector employees in Pakistan. This study was conducted on participants of a fourteen-week training course called the Mid-Career Management Course (MCMC), which is mandatory for public sector employees, in order to ascertain how to enhance their knowledge mapping skills. Methodology: The researchers used both qualitative and quantitative approaches to conduct this study. Primary data about participants’ current level of understanding of knowledge mapping was collected through a structured questionnaire. Later on, the participant observation method was used, where researchers acted as part of the group to gather data from the trainees during their performance in training activities and tasks. Findings: Respondents were examined for skills and abilities in organizing ideas, helping groups develop a conceptual framework, identifying critical knowledge areas of an organization, studying large networks and identifying knowledge flow using nodes and vertices, visualizing information, representing organizational structure, etc. Overall, the responses varied across skills depending on performance and presentations. However, in general all participants demonstrated an average level of use of both IT and non-IT knowledge-mapping tools and techniques during simulation exercises, analysis paper de-briefings, case study reports, post-visit presentations, course reviews, current issue presentations, syndicate meetings, and daily synopses. Research Limitations: This study was conducted on a small population of 67 public sector employees nominated by the federal government to undergo the 14-week MCMC training program at the National Institute of Management, Peshawar, Pakistan. 
The results, however, reflect only a specific class of public sector employees, i.e. those working in grade 18 and having more than 5 years of work experience. Practical Implications: The research findings are useful for trainers, training agencies, government functionaries, and organizations working on capacity building of public sector employees.
Keywords: knowledge management, km in public sector, knowledge management and professional development, knowledge management in training, knowledge mapping
Procedia PDF Downloads 254
1070 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because very small measurement errors are most often hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration. 
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of +/-0.03 is achieved in all modes. In sum, 70% of all cases stay below +/-0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
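The core idea of truncated SVD regularization can be sketched in a few lines. This is a generic illustration on a toy ill-conditioned linear system, not the authors' hybrid EARLINET software; the Hilbert-like matrix, noise level, and truncation index are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of truncated SVD (TSVD) regularization for an ill-posed
# linear system A x = b: invert only the largest singular values, so that
# small measurement errors are not amplified by the tiny ones.

def tsvd_solve(A, b, k):
    """Solve A x = b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]      # discard the noise-amplifying tail
    return Vt.T @ (s_inv * (U.T @ b))

# Toy ill-conditioned forward model: an 8x8 Hilbert matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # tiny measurement noise

x_naive = np.linalg.solve(A, b)   # direct inversion amplifies the noise
x_tsvd = tsvd_solve(A, b, k=4)    # truncation damps it

print("naive error:", np.linalg.norm(x_naive - x_true))
print("TSVD error: ", np.linalg.norm(x_tsvd - x_true))
```

The truncation index `k` plays the role of a regularization parameter, which is exactly why, as the abstract notes, choosing appropriate parameters is itself a difficult part of the method.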
Procedia PDF Downloads 343
1069 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator
Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani
Abstract:
During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This analysis is a rigorous and advanced approach that requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as geometry effects. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. 
Fireball characteristics (diameter, height, heat flux and lifetime) from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon, from ignition up to total burnout. The influence of release parameters such as the injection rate and the radiative fraction on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R’Mel gas field (the largest gas field in Algeria).
Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA
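The kind of empirical correlations that CFD predictions are typically benchmarked against can be sketched as follows. These are the commonly quoted power-law fireball correlations from the process-safety literature (e.g., CCPS/TNO handbooks), given here as a generic illustration, not necessarily the exact formulas used by the authors:

```python
# Hedged sketch: classic empirical fireball correlations for a BLEVE,
# with the constants commonly quoted in process-safety handbooks
# (illustrative, not the study's own formulas). Mass in kg.

def fireball_diameter(mass_kg):
    """Maximum fireball diameter in metres."""
    return 5.8 * mass_kg ** (1.0 / 3.0)

def fireball_duration(mass_kg):
    """Fireball duration in seconds; the exponent changes for large masses."""
    if mass_kg < 30000.0:
        return 0.45 * mass_kg ** (1.0 / 3.0)
    return 2.6 * mass_kg ** (1.0 / 6.0)

def fireball_height(mass_kg):
    """Fireball centre height, commonly taken as 0.75 D."""
    return 0.75 * fireball_diameter(mass_kg)

# Example: a 1-tonne propane release.
m = 1000.0
print(f"D = {fireball_diameter(m):.1f} m, "
      f"t = {fireball_duration(m):.2f} s, "
      f"H = {fireball_height(m):.1f} m")
```

Correlations of this form depend only on the released mass, which is precisely the simplification (no geometry, no release dynamics) that motivates the CFD approach described above.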
Procedia PDF Downloads 186
1068 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions
Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin
Abstract:
Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhausts of diesel-powered vehicles. In 2012, the International Agency for Research on Cancer, which is part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1), based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Among the strategies aimed at limiting exhausts in order to take into consideration the health impact of automobile pollution, filtration of the emissions and use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are indeed complex mixtures of toxic substances that are difficult to study from a toxicological point of view, due to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds, and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil +/- rapeseed methyl ester, either upstream or downstream of a particle filter, or control treatment, rats were sacrificed and their lungs collected. The following indirect genotoxic parameters were measured: (i) telomerase activity and telomere length, associated with rTERT and rTERC gene expression by RT-qPCR, on frozen lungs; (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed, paraffin-embedded (FFPE) lung samples. 
These preliminary results will then be combined with the global cellular response analyzed by pan-genomic microarrays, monitoring of oxidative stress and the quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant system of in vivo exposure.
Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length
Procedia PDF Downloads 390
1067 Recycling the Lanthanides from Permanent Magnets by Electrochemistry in Ionic Liquid
Authors: Celine Bonnaud, Isabelle Billard, Nicolas Papaiconomou, Eric Chainet
Abstract:
Thanks to their high magnetization and low mass, permanent magnets (NdFeB and SmCo) have quickly become essential for new energy technologies (wind turbines, electric vehicles, etc.). They contain large quantities of neodymium, samarium and dysprosium, which have recently been classified as critical elements and therefore need to be recycled. Electrochemical processes, including electrodissolution followed by electrodeposition, are an elegant and environmentally friendly solution for recycling the lanthanides contained in permanent magnets. However, electrochemistry of the lanthanides is a real challenge, as their standard potentials are highly negative (around -2.5 V vs. NHE). Consequently, non-aqueous solvents are required. Ionic liquids (ILs) are novel electrolytes exhibiting physico-chemical properties that fulfill many requirements of sustainable chemistry, such as extremely low volatility and non-flammability. Furthermore, their chemical and electrochemical properties (solvation of metallic ions, large electrochemical windows, etc.) render them very attractive media for implementing alternative and sustainable processes in view of integrated processes. All experiments presented were carried out using butyl-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide. Linear sweep voltammetry, cyclic voltammetry and potentiostatic electrochemical techniques were used. The reliability of electrochemical experiments, performed without a glove box, for the classic three-electrode cell used in this study has been assessed. Deposits were obtained by chronoamperometry and were characterized by scanning electron microscopy and energy-dispersive X-ray spectroscopy. 
The IL cathodic behavior under different constraints (argon, nitrogen or oxygen atmosphere, or water content) and using several electrode materials (Pt, Au, GC) shows that with an argon gas flow and gold as the working electrode, the cathodic potential can reach a maximum value of -3 V vs. Fc+/Fc, thus allowing a possible reduction of lanthanides. On a gold working electrode, the reduction potential of samarium and neodymium was found to be -1.8 V vs. Fc+/Fc, while that of dysprosium was -2.1 V vs. Fc+/Fc. The individual deposits obtained were found to be porous and contained significant amounts of C, N, F, S and O atoms. Selective deposition of neodymium in the presence of dysprosium was also studied and will be discussed. Next, metallic Sm, Nd and Dy electrodes were used in place of Au, which induced changes in the reduction potential values and the deposit structures of the lanthanides. The individual corrosion potentials were also measured in order to determine the parameters influencing the electrodissolution of these metals. Finally, a full recycling process was investigated. The electrodissolution of a real permanent magnet sample was monitored kinetically. Then, the sequential electrodeposition of all lanthanides contained in the IL was investigated. Yields, quality of the deposits and consumption of chemicals will be discussed in depth, in view of the industrial feasibility of this process for recycling real permanent magnets.
Keywords: electrodeposition, electrodissolution, ionic liquids, lanthanides, recycling
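The principle behind selective deposition can be illustrated with the reduction potentials reported above (-1.8 V for Sm/Nd, -2.1 V for Dy, vs. Fc+/Fc). The simple rule used here, that a species deposits when the applied potential is at or below its reduction potential, is an idealisation that ignores overpotentials and kinetics:

```python
# Hedged sketch: choosing an applied potential that deposits Nd (and Sm)
# while leaving Dy in solution, using the potentials from the abstract
# (V vs. Fc+/Fc). The threshold rule is an idealisation, not a model of
# the actual electrochemistry.

REDUCTION_POTENTIAL = {"Sm": -1.8, "Nd": -1.8, "Dy": -2.1}

def deposited_species(e_applied, potentials=REDUCTION_POTENTIAL):
    """Species expected to deposit at a given applied potential (sorted)."""
    return sorted(m for m, e_red in potentials.items() if e_applied <= e_red)

# Holding the electrode between the two reduction potentials, e.g. -1.9 V,
# would reduce Nd and Sm but not Dy; going below -2.1 V deposits all three.
print(deposited_species(-1.9))
print(deposited_species(-2.2))
```

In practice the 0.3 V window between the Nd and Dy potentials is what makes the sequential electrodeposition strategy described in the abstract plausible.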
Procedia PDF Downloads 274
1066 Maker Education as Means for Early Entrepreneurial Education: Evaluation Results from a European Pilot Action
Authors: Elisabeth Unterfrauner, Christian Voigt
Abstract:
Since the foundation of the first Fab Lab by the Massachusetts Institute of Technology about 17 years ago, the Maker movement has spread globally, with maker spaces and Fab Labs founded worldwide. In these workshops, citizens have access to digital fabrication technologies such as 3D printers and laser cutters to develop and test their own ideas and prototypes, which makes them attractive places for start-up companies. Know-how is shared not only in the physical space but also online in diverse communities. According to the Horizon report, the Maker movement will, however, also have an impact on educational settings in the following years. The European project ‘DOIT - Entrepreneurial skills for young social innovators in an open digital world’ has incorporated key elements of making to develop an early entrepreneurial education program for children between the ages of six and 16. Maker pedagogy builds on constructivist learning approaches, learning-by-doing principles, learning in collaborative and interdisciplinary teams, and learning through trial and error, where mistakes are acknowledged as learning opportunities. The DOIT program consists of seven consecutive elements. It starts with a motivation phase, in which students envision the scope of their possibilities. The second step is co-design: students are asked to collect and select potential ideas for innovations. In the co-creation phase, students gather in teams and develop first prototypes of their ideas. In the iteration phase, the prototype is continuously improved; in the following reflection phase, feedback on the prototypes is exchanged between the teams. In the last two steps, scaling and reaching out, the robustness of the prototype is tested with a bigger group of users outside of the educational setting, and finally students share their projects with a wider public. 
The DOIT program involves 1,000 children in two pilot phases at 11 pilot sites in ten different European countries. The comprehensive evaluation design is based on a mixed-methods approach, with its theoretical backbone in Lackeus’ model of entrepreneurship education, which distinguishes between entrepreneurial attitudes, entrepreneurial skills and entrepreneurial knowledge. A pre-post test with quantitative measures, as well as qualitative data from interviews with facilitators and students and from workshop protocols, will reveal the effectiveness of the program. The evaluation results will be presented at the conference.
Keywords: early entrepreneurial education, Fab Lab, maker education, Maker movement
Procedia PDF Downloads 132
1065 The Impact of Floods and Typhoons on Housing Welfare: Case Study of Thua Thien Hue Province, Vietnam
Authors: Seyeon Lee, Suyeon Lee, Julia Rogers
Abstract:
This research investigates and records the post-flood and post-typhoon conditions of low-income housing in Thua Thien Hue Province, an area prone to extreme flooding in Central Vietnam. The cost of rebuilding houses after floods and typhoons has always been a burden for low-income households. These costs often lead to the elimination of essential construction practices for disaster resistance. Despite relief efforts from international non-profit organizations and the Vietnamese government, flood and typhoon damage to residential construction has recurred in the same neighborhoods annually. Notwithstanding its importance, this topic has not been systematically investigated. The study is limited to assistance provided to low-income households, documenting the existing conditions of low-income homes impacted by floods and typhoons in Thua Thien Hue Province. The research identifies the leading causes of building failure from these natural disasters. Relief efforts and progress made since the last typhoon are documented. The quality of construction and repairs is assessed based on the Home Builder's Guide to Coastal Construction by the Federal Emergency Management Agency. Focus group discussions and individual interviews with local residents from four different communities were conducted to gain insights into the repair efforts of the non-profit organizations and the Vietnamese government, and into residents' needs after floods and typhoons. The findings from the field study show that many of the local people are now aware of the importance of improving housing conditions as a key coping strategy for withstanding flood and typhoon events, as it makes housing and the community more resilient to future events. 
While there has been remarkable improvement in housing and infrastructure with support from the local government as well as the non-profit organizations, many households in the study areas were found to still live in weak and fragile housing conditions without access to aid to repair and strengthen their houses. Given that the major immediate recovery action taken by local people tends to focus on repairing damaged houses, and that low-income households consequently spend a considerable amount of their income on housing repair, providing proper and applicable construction practices will not only improve housing conditions but also contribute to reducing poverty in Vietnam.
Keywords: disaster coping mechanism, housing welfare, low-income housing, recovery reduction
Procedia PDF Downloads 271
1064 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mis-transcribed during transcription were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. 
The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
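The splicing and marking steps of a pipeline like this can be sketched in outline. This is a simplified stand-in for the methodology described, not its actual code: the speaker-line format is an assumption, the capitalised-word regex is a crude substitute for the keyword extractor, and the B.A.R.T. summarisation step is omitted:

```python
import re

# Hedged sketch of the transcript-filtering pipeline: splice into
# per-speaker paragraphs, drop short banter (the text's 300-character
# threshold), extract candidate proper nouns, and mark which paragraphs
# mention them.

MIN_CHARS = 300

def splice_paragraphs(transcript):
    """Split on speaker changes ('Name: ...') and drop short paragraphs."""
    paragraphs = re.split(r"\n(?=[A-Z][a-z]+:)", transcript)
    return [p.strip() for p in paragraphs if len(p.strip()) >= MIN_CHARS]

def proper_nouns(paragraph):
    """Capitalised words not at sentence start -- a crude heuristic."""
    found = set()
    for sentence in re.split(r"[.!?]\s+", paragraph):
        for word in sentence.split()[1:]:
            if re.fullmatch(r"[A-Z][a-z]+", word):
                found.add(word)
    return found

def mark_relevant(paragraphs, nouns):
    """Indices of paragraphs mentioning any extracted proper noun."""
    return [i for i, p in enumerate(paragraphs)
            if any(n in p for n in nouns)]
```

Marking indices rather than rewriting the paragraphs is what the abstract's "preservative" framing refers to: the raw text is left untouched for the qualitative coders.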
Procedia PDF Downloads 28
1063 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions
Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann
Abstract:
The application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength, high-stiffness hardwoods, such as European beech, gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of essentially all mechanical properties renders many innovative ideas futile. The tendency of hardwood toward higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduction of load-carrying capacity. Consequently, delamination can be considered the dominant failure mechanism in glued-laminated timbers made of hardwood elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all possibly emerging deformation mechanisms has been missing until now, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood, with all material constants defined as functions of moisture content, was developed. 
Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer constructed from finite elements and represented as a homogeneous, isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane adhesive (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, with additive decomposition of the total strain, are implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured under different levels of moisture content, and the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions.
Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination
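The additive strain decomposition the abstract describes can be caricatured in one dimension. The sketch below is purely illustrative: the constants and update rules are invented, and the paper's actual model is a full 3D orthotropic, moisture-dependent UMAT implementation.

```python
# Illustrative 1D strain update for wood under moisture cycling: total strain
# decomposed additively into elastic, swelling, and mechano-sorptive parts.
# All constants are invented for illustration, not measured wood properties.

def modulus(mc):
    """Young's modulus (MPa), assumed to drop linearly with moisture content mc."""
    E0, k = 14000.0, 30000.0  # assumed dry-state modulus and moisture sensitivity
    return max(E0 - k * mc, 1000.0)

def step_strain(stress, mc, d_mc, eps_ms, alpha=0.3, m=2e-6):
    """One moisture increment d_mc: returns (total strain, updated creep strain)."""
    eps_el = stress / modulus(mc)      # elastic strain at the current moisture
    eps_sw = alpha * d_mc              # free swelling/shrinkage strain
    eps_ms += m * abs(d_mc) * stress   # mechano-sorptive creep accumulates
    return eps_el + eps_sw + eps_ms, eps_ms

stress, eps_ms, mc = 5.0, 0.0, 0.12    # MPa, initial creep strain, moisture content
for d_mc in [0.03, -0.03] * 5:         # ten wetting/drying half-cycles
    mc += d_mc
    total, eps_ms = step_strain(stress, mc, d_mc, eps_ms)
print(f"accumulated mechano-sorptive strain after cycling: {eps_ms:.2e}")
```

The point of the toy update is that the mechano-sorptive term grows with every moisture change regardless of sign, which is why cyclic climate exposure keeps amplifying deformation even when the mean moisture content returns to its starting value.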
1062 The Opinions of Counselor Candidates Regarding Universal Values in the Marriage Relationship
Authors: Seval Kizildag, Ozge Can Aran
Abstract:
The effective intervention of counselors in conflicts between spouses may help increase the quality of the marital relationship. At this point, it is necessary for counselors first to consider their own value systems and then to reflect these correctly in the counseling process. For this reason, it is important to determine counselors' needs. Starting from this point of view, this study aims to reveal counselor candidates' perspectives on universal values in the marriage relationship. The study group was formed through criterion sampling, one of the purposive sampling methods: the criteria were being a candidate in the counseling area and having knowledge of the concepts of the Marriage and Family Counseling course, since candidate students with comprehensive knowledge of the field who have mastered the concepts of marriage and family counseling strengthen the findings of this study. Accordingly, 61 counselor candidates, 32 (52%) female and 29 (48%) male, who were about to graduate from a university in south-east Turkey and who had taken a Marriage and Family Counseling course, voluntarily participated in the study. The average age of the counselor candidates was 23. The marriages of the candidates' parents had come about through arranged marriage (70%), flirting (13%), relative marriage (8%), friend circles (7%), and custom (2%). The data were collected through a Demographic Information Form and a form titled 'Universal Values in Marriage' consisting of six questions prepared by the researchers. After the data were transferred to the computer, the necessary statistical evaluations were made. Qualitative data analysis was applied to the data obtained in the study.
The six basic universal values of trustworthiness, respect, responsibility, fairness, caring, and citizenship, defined as the 'Six Pillars of Character', were used as the basis, and frequency values were calculated through content analysis. According to the findings, the value most students considered most important in the marriage relationship was trustworthiness, while the value they considered least important was citizenship consciousness. The study also found that, in terms of frequency, counselor candidates most often associated the value of trustworthiness with 'loyalty' (33%), respect with 'no violence' (23%), responsibility with 'spouses fulfilling their own gender roles and duties' (35%), fairness with 'impartiality' (25%), caring with 'being helpful' (25%), and finally citizenship with 'love of country' (14%) and 'respect for the laws' (14%). It is believed that these results will contribute to arrangements for developing counseling skills regarding values in marriage and family counseling curricula.
Keywords: caring, citizenship, counselor candidate, fairness, marriage relationship, respect, responsibility, trustworthiness, value system
1061 Landing Performance Improvement Using a Genetic Algorithm for Electric Vertical Take-Off and Landing Aircraft
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute, using vehicles that take off and land vertically to provide passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Against this background, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, finding a feasible way to rely on batteries rather than conventional petroleum-based fuels remains a challenge: batteries are heavy, and their energy density is still well below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this context, this paper addresses a way to optimize the energy consumption in a typical air-taxi mission. The approach and landing procedure was chosen as the subject of optimization by a genetic algorithm, though the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, including those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to make the solution representative, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when changing initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
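The evolutionary search described above can be sketched as follows. The power model, constraint penalty, and all constants are invented placeholders, not the paper's fitted tilt-rotor dynamics; the block only illustrates how a genetic algorithm with elitism, crossover, and mutation minimizes energy over a discretized control schedule.

```python
# Toy genetic algorithm: evolve a landing schedule of (rotor RPM, thrust tilt)
# pairs that minimizes path energy subject to a toy minimum-RPM constraint.
# All models and constants are invented for illustration.
import random

N_STEPS = 20          # discretization of the approach/landing path
POP, GENS = 50, 120   # population size and number of generations

def power(rpm, tilt):
    """Toy electric power draw (kW): grows with rotor RPM and thrust tilt."""
    return 1e-3 * max(rpm, 0.0) ** 1.5 + 5.0 * abs(tilt)

def cost(schedule):
    """Path energy plus a penalty enforcing a toy minimum-RPM descent constraint."""
    dt = 1.0  # assumed seconds per step
    energy = sum(power(rpm, tilt) * dt for rpm, tilt in schedule)
    penalty = sum(100.0 for rpm, _ in schedule if rpm < 1200.0)
    return energy + penalty

def random_schedule():
    return [(random.uniform(1500, 3000), random.uniform(-0.3, 0.3))
            for _ in range(N_STEPS)]

def crossover(a, b):
    cut = random.randrange(1, N_STEPS)       # one-point crossover
    return a[:cut] + b[cut:]

def mutate(s, rate=0.1):
    return [(rpm + random.gauss(0, 50), tilt + random.gauss(0, 0.02))
            if random.random() < rate else (rpm, tilt)
            for rpm, tilt in s]

random.seed(0)
pop = [random_schedule() for _ in range(POP)]
baseline = min(cost(s) for s in pop)         # best of the initial population
for _ in range(GENS):
    pop.sort(key=cost)                       # rank by fitness (lower is better)
    elite = pop[:POP // 5]                   # elitism: keep the best 20%
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=cost)
print(f"initial best cost: {baseline:.1f}, evolved best cost: {cost(best):.1f}")
```

Because the elite individuals survive each generation unchanged, the evolved best cost can never be worse than the initial one; the penalty term stands in for the safety and comfort constraints the abstract mentions.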
1060 Developing City-Level Sustainability Indicators in the MENA Region with the Cases of Benghazi and Amman
Authors: Serag El Hegazi
Abstract:
The development of a methodological framework for assessing local and institutional sustainability is a key factor for future development plans and visions. This paper develops an Approach to Local and Institutional Sustainability Assessment (ALISA): a methodological framework that assists in the clarification, formulation, preparation, selection, and ranking of key indicators to facilitate the assessment of sustainability at the local and institutional levels in North African and Middle Eastern cities. Based on the literature review, the paper formulates ALISA as a combination of the UNCSD (2001) theme indicator framework and the issue-based framework illustrated by McLaren (1996). The framework was implemented to formulate, select, and prioritise the key indicators that most directly reflect the issues of a case study at the local community and institutional level. Yet there is still a lack of clear indicators and frameworks that can be successfully applied at the local and institutional levels in the MENA region, particularly in the cities of Benghazi and Amman, and this is an essential issue for estimating sustainable development. Therefore, a conceptual framework was developed and tested as a methodology for collecting and classifying data. The main goal is to develop the ALISA framework to formulate, choose, and prioritize key sustainability indicators, which can then guide the assessment process and support decision- and policy-makers in developing sustainable cities at the local and institutional level in the city of Benghazi.
The conceptual and methodological framework, supported by documentary analysis and data from two case studies, including focus-group discussions, semi-structured interviews, and questionnaires, reflects the approach required to develop a combined framework for sustainability indicators. To achieve the aim of this paper, which is to develop a practical framework that can be used as a tool for building local and institutional sustainability indicators, the following stages are applied to propose a set of indicators: step one, issue clarification; step two, objective formation and analysis of issues and boundaries; step three, indicator preparation (first list of proposed indicators); step four, indicator selection; and step five, indicator rating and ranking.
Keywords: sustainability indicators, approach to local and institutional level, ALISA, policymakers
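The last two ALISA stages, indicator selection and rating/ranking, can be sketched in a few lines. The indicator names, stakeholder scores, and selection threshold below are invented examples, not data from the Benghazi or Amman case studies.

```python
# Minimal sketch of ALISA steps four and five: select indicators whose mean
# stakeholder rating clears a threshold, then rank the survivors.
scores = {                                   # invented ratings on a 1-5 scale
    "access to clean water":   [5, 4, 5, 4],
    "municipal transparency":  [3, 4, 3, 3],
    "green space per capita":  [4, 3, 4, 5],
    "waste recycling rate":    [2, 3, 2, 3],
}

THRESHOLD = 3.0                              # step four: selection cut-off
mean = {k: sum(v) / len(v) for k, v in scores.items()}
selected = {k: m for k, m in mean.items() if m > THRESHOLD}
ranked = sorted(selected, key=selected.get, reverse=True)   # step five: ranking

for i, name in enumerate(ranked, 1):
    print(f"{i}. {name} ({selected[name]:.2f})")
```

In practice the ratings would come from the focus groups, interviews, and questionnaires the framework prescribes, and the threshold would itself be a stakeholder decision.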
1059 Hawaii, Colorado, and the Netherlands: A Comparative Analysis of the Respective Space Sectors
Authors: Mclee Kerolle
Abstract:
For more than 50 years, the state of Hawaii has had the beginnings of a burgeoning commercial aerospace presence statewide. While Hawaii offers the aerospace industry unique assets in terms of geographic location, lack of range-safety issues, and other factors critical to aerospace development, Hawaii's strategy and commitment regarding aerospace have been unclear. For this reason, this paper presents a comparative analysis of Hawaii's space sector with two of the world's leading space sectors, Colorado and the Netherlands, in order to provide a strategic plan that establishes a firm position for supporting Hawaii's aerospace development statewide. This plan includes financial and other economic incentives, legislatively supported by the State, to help grow and diversify Hawaii's aerospace sector. The first part of this paper examines the business model adopted by the Colorado Space Coalition (CSC), a group of industry stakeholders working to make Colorado a center of excellence for aerospace, as a blueprint for growth in Hawaii's space sector. The second section examines the business model adopted by the Netherlands Space Business Incubation Centre (NSBIC), a European Space Agency (ESA) affiliated program that offers business support for entrepreneurs to turn space-connected business ideas into commercial companies; this serves as a blueprint for incentivizing space businesses to launch and develop in Hawaii. The third section analyzes the current policies both CSC and NSBIC employ to promote industry expansion and legislative advocacy. The final section takes the findings from both space sectors and applies their most adaptable features to a Hawaii-specific space business model that takes into consideration the unique advantages and disadvantages of developing Hawaii's space sector.
The findings of this analysis show that a strategic plan based on a comparative analysis, one that creates high-technology jobs and new pathways for a trained workforce in the space sector and elicits state support and direction, will achieve the goal of establishing Hawaii as a center of space excellence. The analysis will also signal to the federal government, the private sector, and the international community that Hawaii is indeed serious about developing its aerospace industry. Ultimately, this analysis and the subsequent aerospace development plan can serve as a blueprint for the benefit of all space-faring nations seeking to develop their space sectors.
Keywords: Colorado, Hawaii, Netherlands, space policy
1058 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic Systems
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
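The surrogate-plus-search idea behind the abstract can be sketched as follows. As a stand-in for a full random forest, a bagged ensemble of single-split regression stumps learns an invented figure of merit for the four-layer stack and then ranks unseen candidate geometries; the merit() function is a placeholder, not an optics solver, and the constants are arbitrary.

```python
# Sketch of a surrogate model for emitter design: bootstrap-trained regression
# stumps (a minimal stand-in for a random forest) learn a toy figure of merit
# from sampled layer thicknesses, then rank new candidate stacks.
import random

def merit(thicknesses):
    """Toy selectivity score for an (SiC, W, SiO2, W) stack; thicknesses in nm."""
    target = (60.0, 10.0, 90.0, 120.0)        # assumed 'ideal' stack, invented
    return -sum((t - g) ** 2 for t, g in zip(thicknesses, target))

def fit_stump(xs, ys, feat):
    """Best single-feature threshold split minimizing squared error."""
    pairs = sorted(zip((x[feat] for x in xs), ys))
    best = (float("inf"), pairs[0][0], 0.0, 0.0)
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if err < best[0]:
            best = (err, pairs[i][0], ml, mr)
    _, thr, ml, mr = best
    return feat, thr, ml, mr

def predict(forest, x):
    """Average the stump predictions, as a forest averages its trees."""
    return sum(ml if x[f] < thr else mr for f, thr, ml, mr in forest) / len(forest)

random.seed(1)
xs = [[random.uniform(0, 200) for _ in range(4)] for _ in range(150)]
ys = [merit(x) for x in xs]

forest = []
for _ in range(25):                            # 25 bootstrap-sampled stumps
    idx = [random.randrange(len(xs)) for _ in xs]
    bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
    forest.append(fit_stump(bx, by, random.randrange(4)))   # random feature

cands = [[random.uniform(0, 200) for _ in range(4)] for _ in range(300)]
best = max(cands, key=lambda c: predict(forest, c))
print("surrogate-picked stack (nm):", [round(t) for t in best])
```

A real pipeline of the kind the abstract describes would replace merit() with an electromagnetic solver, the stumps with full-depth trees, and the exhaustive candidate scan with the genetic-algorithm search.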
1057 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks
Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska
Abstract:
Gaining control over particle shape, size, and crystallinity is an ongoing challenge for many materials, and metal-organic frameworks (MOFs) in particular are now widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature that can affect the material's further applications, so seeking new approaches that enable MOF morphology modulation is important. MOFs are reticular structures whose building blocks are organic linkers and metallic nodes. The most common strategy for supplying the metal source is to use salts, which usually exhibit high solubility and hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents for MOFs owing to their very low solubility and shape preservation; a metal oxide can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. Other oxides, for instance Co₃O₄, are often overlooked due to their structural stability and insolubility in aqueous solutions. Cobalt-based materials are known for their catalytic activity, so the development of an efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to Co-MOF via a solvothermal approach is proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate) in the presence of urea and the surfactant hexadecyltrimethylammonium bromide (CTAB) as a capping agent. After the cobalt hydroxide was obtained, calcination was performed at various temperatures (300–500 °C). The resulting cobalt oxides, as the source of cobalt cations, were then reacted with trimesic acid in a solvothermal environment at 120 °C, leading to Co-MOF fabrication.
The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials received were characterized with analytical techniques including X-ray powder diffraction, energy-dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOFs varied in particle shape and size. The diffractograms showed a crystalline phase for both Co₃O₄ and Co-MOF. The Co₃O₄ obtained from nitrates with low-temperature calcination resulted in smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of conversion and the morphology of the Co-MOF; the highest conversion was achieved using metal oxides with small crystallites.
Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell
1056 Traumatic Brain Injury in Cameroon: A Prospective Observational Study in a Level 1 Trauma Centre
Authors: Franklin Chu Buh, Irene Ule Ngole Sumbele, Andrew I. R. Maas, Mathieu Motah, Jogi V. Pattisapu, Eric Youm, Basil Kum Meh, Firas H. Kobeissy, Kevin W. Wang, Peter J. A. Hutchinson, Germain Sotoing Taiwe
Abstract:
Introduction: Studying TBI characteristics and their relation to outcomes can identify initiatives to improve TBI prevention and care. The objective of this study was to define the features and outcomes of TBI patients seen over a 1-year period in a level-I trauma center in Cameroon. Methods: Data on demographics, causes, injury mechanisms, clinical aspects, and discharge status were prospectively collected over a period of 12 months. The Glasgow Outcome Scale-Extended (GOSE) and the Quality of Life after Brain Injury questionnaire (QoLIBRI) were used to evaluate outcomes 6 months after TBI. Categorical variables were described as frequencies and percentages; comparisons between two categorical variables were made using Pearson's chi-square test or Fisher's exact test. Results: A total of 160 TBI patients participated in the study. The age group 15-45 years was most represented (78%; 125), and males were more affected (90%; 144). A low educational level was recorded in 122 (76%) cases. Road traffic incidents (RTI) were the main cause of TBI (85%), with professional bike riders frequently involved (27%, 43/160); assaults (7.5%) and falls (2.5%) were the second and third most common causes. Only 15 patients were transported to the hospital by ambulance, 14 of them from a referring hospital. CT imaging was performed in 78% (125/160) of cases; an intracranial traumatic abnormality was identified in 77/125 (64%) of those imaged. Financial constraints were the main reason a CT scan was not performed in 35 patients. A total of 46 (33%) patients were discharged against medical advice (DAMA) due to financial constraints. Mortality was 14% (22/160) but disproportionately high in patients with severe TBI (46%). Patients discharged against medical advice had poor QoLIBRI outcomes. Only 4 patients received post-injury physiotherapy services.
Conclusion: TBI in Cameroon mainly results from RTIs and commonly affects young adult males; low educational or socioeconomic status and commercial bike riding appear to be predisposing factors. Lack of pre-hospital care, financial constraints limiting both CT scanning and medical care, and lack of acute physiotherapy services likely influenced care and outcomes adversely.
Keywords: characteristics, traumatic brain injury, outcome, disparities in care, prospective study
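The Pearson chi-square test named in the methods can be computed by hand from a contingency table. The 2x2 table below (TBI severity versus discharge against medical advice) uses invented counts for illustration, not the study's data.

```python
# Hand-computed Pearson chi-square on an invented 2x2 table: for each cell,
# expected count = row total * column total / grand total, and the statistic
# sums (observed - expected)^2 / expected over all cells.
table = [[10, 36],    # severe TBI:     [DAMA, stayed]  (illustrative counts)
         [36, 78]]    # non-severe TBI: [DAMA, stayed]

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
print(f"chi-square statistic: {chi2:.3f}")
```

For a 2x2 table there is one degree of freedom, so the statistic would be compared against the 3.841 critical value at the 0.05 level; with these invented counts it falls well below that.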
1055 Machine Learning and the Internet of Things for Smart Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to engineering is to determine statistically consistent water volumes or flows that, in each case, allow a series of elements or structures to be sized or designed so as to effectively manage and develop a river basin. To determine these values, there are several ways of working within traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). This range of studies in a given basin is varied and complex and presents the difficulty of collecting data in real time. In this complex setting, the study of these variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research implemented the learning project in the sub-basin of the Shullcas river, within the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins in the European Union. The machine learning application was programmed to choose the algorithms that best determine the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m a.s.l., leading to the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances should range between 5 and 10 km; in this way the transmission power can be reduced and a higher bitrate used. If the communication elements of the devices in the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the system's autonomy requirements; to increase autonomy, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex-M class microcontrollers, together with high-efficiency DC-DC converters. The machine learning system has begun learning the Shullcas sub-basin to generate the best hydrology for it. This will improve as the model assimilates the data arriving in the big-data store every second, providing each application of the complex system with the best estimates of the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
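Learning a rainfall-runoff relationship from telemetry can be sketched with a lag-feature least-squares fit. The synthetic data and linear model below are illustrative stand-ins for the Shullcas sensor stream and the project's algorithm selection, and the coefficients of the hidden "true" basin response are invented.

```python
# Sketch: fit runoff as a linear function of today's and yesterday's rainfall
# via ordinary least squares (normal equations solved by Gaussian elimination).
# Synthetic data stands in for real basin telemetry.
import random

random.seed(2)
rain = [max(0.0, random.gauss(5, 4)) for _ in range(200)]        # mm/day
# Hidden 'true' basin response with one-day memory plus sensor noise:
runoff = [0.4 * rain[t] + 0.25 * rain[t - 1] + random.gauss(0, 0.3)
          for t in range(1, len(rain))]

# Design matrix rows: [1, rain_t, rain_{t-1}]
X = [[1.0, rain[t], rain[t - 1]] for t in range(1, len(rain))]

def fit_ols(X, y):
    """Solve (X^T X) beta = X^T y by Gaussian elimination (3x3 here)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                       # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, n))) / A[i][i]
    return beta

beta = fit_ols(X, runoff)
print("intercept, rain_t, rain_t-1 coefficients:",
      [round(v, 2) for v in beta])
```

With 199 daily samples the fitted coefficients land close to the hidden 0.4 and 0.25 weights; in the deployed system such a model would be refitted continuously as new sensor data arrives from the basin.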