Search results for: performance and quality
2960 Rational Approach to the Design of a Sustainable Drainage System for Permanent Site of Federal Polytechnic Oko: A Case Study for Flood Mitigation and Environmental Management
Authors: Fortune Chibuike Onyia, Femi Ogundeji Ayodele
Abstract:
The design of a drainage system at the permanent site of Federal Polytechnic Oko in Anambra State is critical for mitigating flooding, managing surface runoff, and ensuring environmental sustainability. The design process employed a comprehensive analysis involving topographical surveys, hydraulic modeling, and the assessment of local soil types to ensure stability and efficient water conveyance. Proper slope gradients were considered to maintain adequate flow velocities and avoid sediment deposition, which could hinder long-term performance. From the result, the channel size estimated was 0.199m by 0.0199m and 0.0199m². This study proposed a channel size of 1.4m depth by 0.5m width and 0.7m², optimized to accommodate the anticipated peak flow resulting from heavy rainfall and storm-water events. This sizing is based on hydrological data, which takes into account rainfall intensity, runoff coefficients, and catchment area characteristics. The objective is to effectively convey storm-water while preventing overflow, erosion, and subsequent damage to infrastructure and properties. This sustainable approach incorporates provisions for maintenance and aligns with urban drainage standards to enhance durability and reliability. Implementing this drainage system will mitigate flood risks, safeguard campus facilities, improve overall water management, and contribute to the development of resilient infrastructure at Federal Polytechnic Oko.Keywords: flood mitigation, drainage system, sustainable design, environmental management
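The sizing logic summarized above (a peak flow derived from rainfall intensity, runoff coefficient, and catchment area, then a channel section checked for conveyance capacity) can be illustrated with a short calculation. The sketch below is not the authors' computation; it is a minimal illustration of the Rational Method and Manning's equation, and the runoff coefficient, rainfall intensity, catchment area, slope, and roughness values are assumed for illustration only.

```python
# Minimal sketch of the hydrologic/hydraulic checks behind open-channel sizing.
# All input values below are assumptions for illustration, not data from the study.

def rational_peak_flow(C, i_mm_per_hr, A_ha):
    """Rational Method: Q = C*i*A / 360, with i in mm/h, A in hectares, Q in m^3/s."""
    return C * i_mm_per_hr * A_ha / 360.0

def manning_capacity(width_m, depth_m, slope, n=0.013):
    """Full-flow capacity of a rectangular channel from Manning's equation."""
    area = width_m * depth_m                      # flow area (m^2)
    wetted_perimeter = width_m + 2.0 * depth_m    # rectangular section
    R = area / wetted_perimeter                   # hydraulic radius (m)
    velocity = (1.0 / n) * R ** (2.0 / 3.0) * slope ** 0.5
    return area * velocity                        # discharge (m^3/s)

if __name__ == "__main__":
    Q_peak = rational_peak_flow(C=0.7, i_mm_per_hr=120.0, A_ha=5.0)    # assumed catchment
    Q_cap = manning_capacity(width_m=0.5, depth_m=1.4, slope=0.005)    # proposed 0.5 m x 1.4 m section
    print(f"Rational Method peak flow: {Q_peak:.2f} m^3/s")
    print(f"Manning capacity of proposed channel: {Q_cap:.2f} m^3/s")
    print("Channel adequate" if Q_cap >= Q_peak else "Channel undersized")
```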
Procedia PDF Downloads 13

2959 Analysis of the Learning Effectiveness of the STEAM-6E Course: A Case Study on the Development of Virtual Idol Product Design as an Example
Authors: Mei-Chun. Chang
Abstract:
STEAM (Science, Technology, Engineering, Art, and Mathematics) represents a cross-disciplinary, learner-centered teaching model that helps students link theory with real-world situations, thereby improving a range of abilities. This study explores students' learning performance after the 6E model was applied to STEAM teaching in a professional course in the digital media design department of a technical college, as well as the difficulties encountered in designing and implementing the STEAM curriculum and the countermeasures adopted. Through industry experts' work experience, exchange activities, and course teaching, learners reflect on the design and development value of virtual idol products that meet user needs and employ AR/VR technology to innovate their product applications. Using an action research approach, the investigation recruited 35 junior students from the department of digital media design of the school where the researcher teaches as research subjects. The teaching research was conducted in two stages spanning ten weeks and 30 sessions. Quantitative and qualitative data were collected and analyzed through a ‘design draft sheet’, ‘student interview record’, ‘STEAM Product Semantic Scale’, and ‘Creative Product Semantic Scale (CPSS)’. Research conclusions are presented, and relevant suggestions are proposed as a reference for teachers and follow-up researchers. The contribution of this study is to teach college students to develop original virtual idols and product designs, to improve learning effectiveness through STEAM teaching activities, and to cultivate innovative, practical, cross-disciplinary design talent.
Keywords: STEAM, 6E model, virtual idol, learning effectiveness, practical courses
Procedia PDF Downloads 127

2958 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures for modeling time-to-event data. In complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, because its strong assumptions limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
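As a point of reference for the quantities the model predicts, the sketch below shows how a cumulative incidence function is accumulated from discrete-time cause-specific hazards, and how a per-cause weighting (a stand-in for the RIW attention idea) would enter that calculation. It is an illustrative NumPy computation only, not the CmpXRnnSurv_AE architecture; the hazard values and weights are assumed.

```python
import numpy as np

# Illustrative only: discrete-time cause-specific hazards h[k, t] for K competing risks.
# In the paper these probabilities would come from the recurrent network; here they are assumed.
hazards = np.array([
    [0.02, 0.03, 0.04, 0.05, 0.05],   # cause 1
    [0.01, 0.01, 0.02, 0.02, 0.03],   # cause 2
])
K, T = hazards.shape

overall_hazard = hazards.sum(axis=0)                 # probability of any event at time t
surv = np.cumprod(1.0 - overall_hazard)              # S(t) = prod_{s<=t} (1 - h(s))
surv_prev = np.concatenate(([1.0], surv[:-1]))       # S(t-1), with S(0) = 1

# Cumulative incidence for cause k: CIF_k(t) = sum_{s<=t} S(s-1) * h_k(s)
cif = np.cumsum(surv_prev * hazards, axis=1)

# A weighted CIF: per-cause, per-time weights (e.g., attention scores) rescale the increments.
weights = np.full((K, T), 1.0 / K)                   # assumed uniform weights for the sketch
wcif = np.cumsum(weights * surv_prev * hazards, axis=1)

print("Survival function:", np.round(surv, 3))
print("CIF per cause:\n", np.round(cif, 3))
print("Weighted CIF per cause:\n", np.round(wcif, 3))
```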
Procedia PDF Downloads 91

2957 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The propensity score matching (PSM) technique has been widely used for estimating the causal effect of a treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most commonly used technique; logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model remains an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be more appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for greater accuracy in non-linear situations. In this study, an attempt has been made to assess the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with that of the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models for the treatment were considered under several situations (small/large sample, low/high number of treatment units), and we examined which method leads to better covariate balance in the matched dataset. It was found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching
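To make the comparison concrete, a simplified workflow is sketched below: simulate covariates with a non-linear treatment assignment, estimate the propensity score with a plain linear-logit logistic regression and with a spline-based (GAM-like) logistic model, match 1:1 on the estimated score, and compare standardized mean differences. This is a hedged illustration that uses statsmodels spline terms as a stand-in for a full GAM; it is not the authors' simulation design, and all data-generating values are assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Non-linear true assignment mechanism (assumed for the sketch).
logit = 0.5 * x1 - 0.8 * x2 + 0.6 * x1 ** 2 - 1.5
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
df = pd.DataFrame({"treat": treat, "x1": x1, "x2": x2})

# Propensity scores: plain linear logit vs. spline-based (GAM-like) logit.
ps_linear = smf.glm("treat ~ x1 + x2", df, family=sm.families.Binomial()).fit().fittedvalues
ps_spline = smf.glm("treat ~ bs(x1, df=4) + bs(x2, df=4)", df,
                    family=sm.families.Binomial()).fit().fittedvalues

def smd_after_matching(ps):
    """Greedy 1:1 nearest-neighbour matching on the PS, then standardized mean differences."""
    treated = df.index[df.treat == 1]
    controls = set(df.index[df.treat == 0])
    pairs = []
    for i in treated:
        j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
        controls.remove(j)
        pairs.append((i, j))
    t_idx = [i for i, _ in pairs]
    c_idx = [j for _, j in pairs]
    smd = {}
    for cov in ["x1", "x2"]:
        diff = df.loc[t_idx, cov].mean() - df.loc[c_idx, cov].mean()
        pooled_sd = np.sqrt(0.5 * (df.loc[t_idx, cov].var() + df.loc[c_idx, cov].var()))
        smd[cov] = round(diff / pooled_sd, 3)
    return smd

print("SMD after matching, linear-logit PS:", smd_after_matching(ps_linear))
print("SMD after matching, spline (GAM-like) PS:", smd_after_matching(ps_spline))
```

Smaller absolute standardized mean differences indicate better covariate balance, which is the criterion the study uses to compare the two estimators.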
Procedia PDF Downloads 369

2956 Effects of Soaking of Maize on the Viscosity of Masa and Tortilla Physical Properties at Different Nixtamalization Times
Authors: Jorge Martínez-Rodríguez, Esther Pérez-Carrillo, Diana Laura Anchondo Álvarez, Julia Lucía Leal Villarreal, Mariana Juárez Dominguez, Luisa Fernanda Torres Hernández, Daniela Salinas Morales, Erick Heredia-Olea
Abstract:
Maize tortillas are a staple food in Mexico which are mostly made by nixtamalization, which includes the cooking and steeping of maize kernels in alkaline conditions. The cooking step in nixtamalization demands a lot of energy and also generates nejayote, a water pollutant, at the end of the process. The aim of this study was to reduce the cooking time by adding a maize soaking step before nixtamalization while maintaining the quality properties of masa and tortillas. Maize kernels were soaked for 36 h to increase moisture up to 36%. Then, the effect of different cooking times (0, 5, 10, 15, 20, 20, 25, 30, 35, 45-control and 50 minutes) was evaluated on viscosity profile (RVA) of masa to select the treatments with a profile similar or equal to control. All treatments were left steeping overnight and had the same milling conditions. Treatments selected were 20- and 25-min cooking times which had similar values for pasting temperature (79.23°C and 80.23°C), Maximum Viscosity (105.88 Cp and 96.25 Cp) and Final Viscosity (188.5 Cp and 174 Cp) to those of 45 min-control (77.65 °C, 110.08 Cp, and 186.70 Cp, respectively). Afterward, tortillas were produced with the chosen treatments (20 and 25 min) and for control, then were analyzed for texture, damage starch, colorimetry, thickness, and average diameter. Colorimetric analysis of tortillas only showed significant differences for yellow/blue coordinates (b* parameter) at 20 min (0.885), unlike the 25-minute treatment (1.122). Luminosity (L*) and red/green coordinates (a*) showed no significant differences from treatments with respect control (69.912 and 1.072, respectively); however, 25 minutes was closer in both parameters (73.390 and 1.122) than 20 minutes (74.08 and 0.884). For the color difference, (E), the 25 min value (3.84) was the most similar to the control. However, for tortilla thickness and diameter, the 20-minute with 1.57 mm and 13.12 cm respectively was closer to those of the control (1.69 mm and 13.86 cm) although smaller to it. On the other hand, the 25 min treatment tortilla was smaller than both 20 min and control with 1.51 mm thickness and 13.590 cm diameter. According to texture analyses, there was no difference in terms of stretchability (8.803-10.308 gf) and distance for the break (95.70-126.46 mm) among all treatments. However, for the breaking point, all treatments (317.1 gf and 276.5 gf for 25 and 20- min treatment, respectively) were significantly different from the control tortilla (392.2 gf). Results suggest that by adding a soaking step and reducing cooking time by 25 minutes, masa and tortillas obtained had similar functional and textural properties to the traditional nixtamalization process.Keywords: tortilla, nixtamalization, corn, lime cooking, RVA, colorimetry, texture, masa rheology
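The colour comparison reported above (L*, a*, b* values and an overall difference ΔE) follows standard CIELAB arithmetic. The snippet below reproduces the shape of that calculation using the CIE76 formula as an assumption about which ΔE variant was applied; the control b* value is not reported in the abstract and is assumed here, so the printed numbers are illustrative rather than a reproduction of the reported ΔE of 3.84.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# L* and a* for the control are quoted in the abstract; its b* is assumed for illustration.
control = (69.912, 1.072, 1.0)
t20 = (74.08, 0.884, 0.885)      # 20-minute treatment (L*, a*, b*)
t25 = (73.390, 1.122, 1.122)     # 25-minute treatment (L*, a*, b*)

for name, lab in [("20 min", t20), ("25 min", t25)]:
    print(f"Delta E vs control ({name}): {delta_e_cie76(control, lab):.2f}")
```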
Procedia PDF Downloads 181

2955 Teaching Kindness as Moral Virtue in Preschool Children: The Effectiveness of Picture-Storybook Reading and Hand-Puppet Storytelling
Authors: Rose Mini Agoes Salim, Shahnaz Safitri
Abstract:
The aim of this study is to test the effectiveness of teaching kindness in preschool children by using several techniques. Kindness is a physical act or emotional support aimed to build or maintain relationships with others. Kindness is known to be essential in the development of moral reasoning to distinguish between the good and bad things. In this study, kindness is operationalized as several acts including helping friends, comforting sad friends, inviting friends to play, protecting others, sharing, saying hello, saying thank you, encouraging others, and apologizing. It is mentioned that kindness is crucial to be developed in preschool children because this is the time the children begin to interact with their social environment through play. Furthermore, preschool children's cognitive development makes them begin to represent the world with words, which then allows them to interact with others. On the other hand, preschool children egocentric thinking makes them still need to learn to consider another person's perspective. In relation to social interaction, preschool children need to be stimulated and assisted by adult to be able to pay attention to other and act with kindness toward them. On teaching kindness to children, the quality of interaction between children and their significant others is the key factor. It is known that preschool children learn about kindness by imitating adults on their two way interaction. Specifically, this study examines two types of teaching techniques that can be done by parents as a way to teach kindness, namely the picture-storybook reading and hand-puppet storytelling. These techniques were examined because both activities are easy to do and both also provide a model of behavior for the child based on the character in the story. To specifically examine those techniques effectiveness in teaching kindness, two studies were conducted. Study I involves 31 children aged 5-6 years old with picture-storybook reading technique, where the intervention is done by reading 8 picture books for 8 days. In study II, hand-puppet storytelling technique is examined to 32 children aged 3-5 years old. The treatments effectiveness are measured using an instrument in the form of nine colored cards that describe the behavior of kindness. Data analysis using Wilcoxon Signed-rank test shows a significant difference on the average score of kindness (p < 0.05) before and after the intervention has been held. For daily observation, a ‘kindness tree’ and observation sheets are used which are filled out by the teacher. Two weeks after interventions, an improvement on all kindness behaviors measured is intact. The same result is also gained from both ‘kindness tree’ and observational sheets.Keywords: kindness, moral teaching, storytelling, hand puppet
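For readers unfamiliar with the pre/post comparison used here, the snippet below shows the shape of a Wilcoxon signed-rank test on paired kindness scores. The scores are invented for illustration; only the test itself corresponds to the analysis described in the abstract.

```python
from scipy.stats import wilcoxon

# Hypothetical paired kindness scores (from the nine-card instrument) for ten children;
# the real study used 31 and 32 children for the two techniques.
pre =  [3, 4, 2, 5, 3, 4, 2, 3, 4, 3]
post = [6, 7, 5, 7, 6, 6, 5, 6, 7, 5]

stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant pre/post difference in kindness scores.")
```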
Procedia PDF Downloads 253

2954 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants, now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatments (calcite), wastewater treatment (cellulose), and material from surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor, and control the potential risks. Indeed, the goal is to define the major risks in terms of human health, quality of materials, and environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out by using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify the hazards in the first stage of the design when hazards and associated risks are not well known. The identified hazards were analyzed to define the potential associated risks, and then these were evaluated by using the qualitative Event Tree Analysis. ETA is a logical methodology used to define the consequences for a specific hazardous incidents, evaluating the failure modes of safety barriers and dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining of HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Then, key risks were identified, and a quantitative framework was developed based on the type of risks identified, such as QMRA and QCRA. These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds such as heavy metals into the bio-composite materials. Thus, due to these contaminations, the bio-composite product, during its application, might release toxic substances into the environment leading to a negative environmental impact. Therefore, leaching tests are going to be planned to simulate the application of these materials into the environment and evaluate the potential leaching of inorganic substances, assessing environmental risk.Keywords: bio-composite, risk assessment, water reuse, resource recovery
Procedia PDF Downloads 111

2953 Treatment of Papillary Thyroid Carcinoma Metastasis to the Sternum: A Case Report
Authors: Geliashvili T. M., Tyulyandina A. S., Valiev A. K., Kononets P. V., Kharatishvili T. K., Salkov A. G., Pronin A. I., Gadzhieva E. H., Parnas A. V., Ilyakov V. S.
Abstract:
Aim/Introduction: Metastasis (Mts) to the sternum, while extremely rare in differentiated thyroid cancer (DTC) (1), requires a personalized, multidisciplinary treatment approach. In aggressively growing Mts to the sternum, which rapidly become unresectable, a comprehensive therapeutic and diagnostic approach is particularly important. Materials and methods: We present a clinical case of solitary Mts to the sternum as first manifestation of a papillary thyroid microcarcinoma in a 55-year-old man. Results: 18F-FDG PET/CT after thyroidectomy confirmed the solitary Mts to the sternum with extremely high FDG uptake (SUVmax=71,1), which predicted its radioiodine-refractory (RIR). Due to close attachment to the mediastinum and rapid growth, Mts was considered unresectable. During the next three months, the patient received targeted therapy with the tyrosine kinase inhibitor (TKI) Lenvatinib 24 mg per day. 1st course of radioiodine therapy (RIT) 6 GBq was also performed, the results of which confirmed the RIR of the tumor process. As a result of systemic therapy (targeted therapy combined with RIT and suppressive hormone therapy with L-thyroxine), there was a significant biochemical response (decrease of serum thyroglobulin level from 50,000 ng/ml to 550 ng/ml) and a partial response with decrease of tumor size (from 80x69x123 mm to 65x50x112 mm) and decrease of FDG accumulation (SUVmax from 71.1 to 63). All of this made possible to perform surgical treatment of Mts - sternal extirpation with its replacement by an individual titanium implant. At the control examination, the stimulated thyroglobulin level was only 134 ng/ml, and PET/CT revealed postoperative areas of 18F-FDG metabolism in the removed sternal Mts. Also, 18F-FDG PET/CT in the early (metabolic) stage revealed two new bone Mts (in the area of L3 SUVmax=17,32 and right iliac bone SUVmax=13,73), which, as well as the removed sternal Mts, appeared to be RIRs at the 2nd course of RIT 6 GBq. Subsequently, on 02.2022, external beam radiation therapy (EBRT) was performed on the newly identified oligometastatic bone foci. At present, the patient is under dynamic monitoring and in the process of suppressive hormone therapy with L-thyroxine. Conclusion: Thus, only due to the early prescription of targeted TKI therapy was it possible to perform surgical resection of Mts to the sternum, thereby improve the patient's quality of life and preserve the possibility of radical treatment in case of oligometastatic disease progression.Keywords: differentiated thyroid cancer, metastasis to the sternum, radioiodine therapy, radioiodine-refractory cancer, targeted therapy, lenvatinib
Procedia PDF Downloads 109

2952 Application of Water Soluble Polymers in Chemical Enhanced Oil Recovery
Authors: M. Shahzad Kamal, Abdullah S. Sultan, Usamah A. Al-Mubaiyedh, Ibnelwaleed A. Hussein
Abstract:
Oil recovery from reservoirs using conventional oil recovery techniques like water flooding is less than 20%. Enhanced oil recovery (EOR) techniques are applied to recover additional oil. Surfactant-polymer flooding is a promising EOR technique used to recover residual oil from reservoirs. Water soluble polymers are used to increase the viscosity of displacing fluids. Surfactants increase the capillary number by reducing the interfacial tension between oil and displacing fluid. Hydrolyzed polyacrylamide (HPAM) is widely used in polymer flooding applications due to its low cost and other desirable properties. HPAM works well in low-temperature and low salinity-environment. In the presence of salts HPAM viscosity decrease due to charge screening effect and it can precipitate at high temperatures in the presence of salts. Various strategies have been adopted to extend the application of water soluble polymers to high-temperature high-salinity (HTHS) reservoir. These include addition of monomers to acrylamide chain that can protect it against thermal hydrolysis. In this work, rheological properties of various water soluble polymers were investigated to find out suitable polymer and surfactant-polymer systems for HTHS reservoirs. Polymer concentration ranged from 0.1 to 1 % (w/v). Effect of temperature, salinity and polymer concentration was investigated using both steady shear and dynamic measurements. Acrylamido tertiary butyl sulfonate based copolymer showed better performance under HTHS conditions compared to HPAM. Moreover, thermoviscosifying polymer showed excellent rheological properties and increase in the viscosity was observed with increase temperature. This property is highly desirable for EOR application.Keywords: rheology, polyacrylamide, salinity, enhanced oil recovery, polymer flooding
Procedia PDF Downloads 413

2951 Characterization of Natural Polymers for Guided Bone Regeneration Applications
Authors: Benedetta Isella, Aleksander Drinic, Alissa Heim, Phillip Czichowski, Lisa Lauts, Hans Leemhuis
Abstract:
Introduction: Membranes for guided bone regeneration are essential to perform a barrier function between the soft and the regenerating bone tissue. Bioabsorbable membranes are desirable in this field as they do not require a secondary surgery for removal, decreasing patient surgical risk. Collagen was the first bioabsorbable alternative introduced on the market, but its degradation time may be too fast to guarantee bone regeneration, and optimisation is needed. Silk fibroin, being biocompatible, slowly bioabsorbable, and processable into different scaffold types, could be a promising alternative. Objectives: The objective is to compare the general performance of a silk fibroin membrane for guided bone regeneration to current collagen alternatives developing suitable standardized tests for the mechanical and morphological characterization. Methods: Silk fibroin and collagen-based membranes were compared from the morphological and chemical perspective, with techniques such as SEM imaging and from the mechanical point of view with techniques such as tensile and suture retention strength (SRS) tests. Results: Silk fibroin revealed a high degree of reproducibility in surface density. The SRS of silk fibroin (0.76 ± 0.04 N), although lower than collagen, was still comparable to native tissues such as the internal mammary artery (0.56 N), and the same can be extended to general mechanical behaviour in tensile tests. The SRS could be increased by an increase in thickness. Conclusion: Silk fibroin is a promising material in the field of guided bone regeneration, covering the interesting position of not being considered a product containing cells or tissues of animal origin from the regulatory perspective and having longer degradation times with respect to collagen.Keywords: guided bone regeneration, mechanical characterization, membrane, silk fibroin
Procedia PDF Downloads 45

2950 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Uncertainty quantification of the safety margins applied to nuclear reactors is an important tool for preventing future radioactive accidents. Nuclear fuel performance codes typically rely on tolerance levels determined by traditional deterministic models, which produce acceptable results for burnup cycles below 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation together with physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest and lead to a treatment known as the Fuchs-Hansen model; the point-kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed to produce a probabilistic forecast of fuel failure. The agreement between computational simulations and experimental results was acceptable. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a reactivity accident. The propagation of uncertainty follows Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxide layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
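Two quantitative ingredients mentioned above can be illustrated briefly: the Wilks (non-parametric tolerance-limit) formula that fixes the number of code runs needed for a 95/95 statement, and a multivariate logistic regression of failure probability on the listed variables. The data and coefficients in the sketch are synthetic; only the formulas are standard, and nothing below is taken from the study's results.

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """First-order, one-sided Wilks formula: smallest N with 1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print("Runs required for a 95/95 one-sided tolerance limit:", wilks_sample_size())  # 59

# Synthetic stand-in data: burnup (GWd/MTU), peak power, pulse width (ms),
# oxide layer thickness (um), cladding type (0/1). Coefficients are NOT from the study.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(40, 80, n),      # burnup
    rng.uniform(50, 200, n),     # applied peak power
    rng.uniform(10, 80, n),      # pulse width
    rng.uniform(5, 100, n),      # oxide layer thickness
    rng.integers(0, 2, n),       # cladding type
])
logit = -12.0 + 0.08 * X[:, 0] + 0.02 * X[:, 1] - 0.01 * X[:, 2] + 0.04 * X[:, 3] + 0.5 * X[:, 4]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = simulated rod failure

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Estimated coefficients:", np.round(model.coef_[0], 3))
p_fail = model.predict_proba([[75, 150, 30, 60, 1]])[0, 1]
print(f"Failure probability for one hypothetical rod: {p_fail:.3f}")
```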
Procedia PDF Downloads 293

2949 Integrated Coastal Management for the Sustainable Development of Coastal Cities: The Case of El-Mina, Tripoli, Lebanon
Authors: G. Ghamrawi, Y. Abunnasr, M. Fawaz, S. Yazigi
Abstract:
Coastal cities are constantly exposed to environmental degradation and economic regression fueled by rapid and uncontrolled urban growth as well as continuous resource depletion. This is the case of the City of Mina in Tripoli (Lebanon), where lack of awareness to preserve social, ecological, and historical assets, coupled with the increasing development pressures, are threatening the socioeconomic status of the city residents, the quality of life and accessibility to the coast. To address these challenges, a holistic coastal urban design and planning approach was developed to analyze the environmental, political, legal, and socioeconomic context of the city. This approach aims to investigate the potential of balancing urban development with the protection and enhancement of cultural, ecological, and environmental assets under an integrated coastal zone management approach (ICZM). The analysis of Mina's different sectors adopted several tools that include direct field observation, interviews with stakeholders, analysis of available data, historical maps, and previously proposed projects. The findings from the analysis were mapped and graphically represented, allowing the recognition of character zones that become the design intervention units. Consequently, the thesis proposes an urban, city-scale intervention that identifies 6 different character zones (the historical fishing port, Abdul Wahab island, the abandoned Port Said, Hammam el Makloub, the sand beach, and the new developable area) and proposes context-specific design interventions that capitalize on the main characteristics of each zone. Moreover, the intervention builds on the institutional framework of ICZM as well as other studies previously conducted for the coast and adopts nature-based solutions with hybrid systems for providing better environmental design solutions for developing the coast. This enables the realization of an all-inclusive, well-connected shoreline with easy and free access towards the sea; a developed shoreline with an active local economy, and an improved urban environment.Keywords: blue green infrastructure, coastal cities, hybrid solutions, integrated coastal zone management, sustainable development, urban planning
Procedia PDF Downloads 158

2948 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem
Authors: Renata Kurpiewska-Korbut
Abstract:
On the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context (the Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid), together with the theoretical perspective of contingency theory – whose central point is that the context, or a specific set of conditions, determines behavior and the choice of methods of action – help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices of using and sharing data (including safeguards for sensitive data) are implemented by the surveyed organizations, which have comparable human and technological capabilities, and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices shaped by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.
Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine
Procedia PDF Downloads 93

2947 The Developmental Model of Teaching and Learning Clinical Practicum at Postpartum Ward for Nursing Students by Using VARK Learning Styles
Authors: Wanwadee Neamsakul
Abstract:
VARK learning style is an effective method of learning that could enhance all skills of the students like visual (V), auditory (A), read/write (R), and kinesthetic (K). This learning style benefits the students in terms of professional competencies, critical thinking and lifelong learning which are the desirable characteristics of the nursing students. This study aimed to develop a model of teaching and learning clinical practicum at postpartum ward for nursing students by using VARK learning styles, and evaluate the nursing students’ opinions about the developmental model. A methodology used for this study was research and development (R&D). The model was developed by focus group discussion with five obstetric nursing instructors who have experiences teaching Maternal Newborn and Midwifery I subject. The activities related to practices in the postpartum (PP) ward including all skills of VARK were assigned into the matrix table. The researcher asked the experts to supervise the model and adjusted the model following the supervision. Subsequently, it was brought to be tried out with the nursing students who practiced on the PP ward. Thirty third year nursing students from one of the northern Nursing Colleges, Academic year 2015 were purposive sampling. The opinions about the satisfaction of the model were collected using a questionnaire which was tested for its validity and reliability. Data were analyzed using descriptive statistics. The developed model composed of 27 activities. Seven activities were developed as enhancement of visual skills for the nursing students (25.93%), five activities as auditory skills (18.52%), six activities as read and write skills (22.22%), and nine activities as kinesthetic skills (33.33%). Overall opinions about the model were reported at the highest level of average satisfaction (mean=4.63, S.D=0.45). In the aspects of visual skill (mean=4.80, S.D=0.45) was reported at the highest level of average satisfaction followed by auditory skill (mean=4.62, S.D=0.43), read and write skill (mean=4.57, S.D=0.46), and kinesthetic skill (mean=4.53, S.D=0.45) which were reported at the highest level of average satisfaction, respectively. The nursing students reported that the model could help them employ all of their skills during practicing and taking care of the postpartum women and newborn babies. They could establish self-confidence while providing care and felt proud of themselves by the benefits of the model. It can be said that using VARK learning style to develop the model could enhance both nursing students’ competencies and positive attitude towards the nursing profession. Consequently, they could provide quality care for postpartum women and newborn babies effectively in the long run.Keywords: model, nursing students, postpartum ward, teaching and learning clinical practicum
Procedia PDF Downloads 152

2946 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei
Abstract:
Single-cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with the Euclidean distance or its derivatives. However, the scRNA-seq process follows a multivariate Bernoulli event model, so we hypothesize that it is more appropriate to value the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods in a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by changing its definition of divergence between cells from Euclidean distance to Kullback–Leibler divergence. Results showed that KL-SNE dissected cell heterogeneity more effectively than t-SNE, again indicating the better performance of relative entropy compared with Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together with KL-SNE but not with t-SNE. Surprisingly, cells of the early stage were surrounded by cells of the medial stage when processed with KL-SNE, while medial cells neighbored the late stage when processed with t-SNE. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with scRNA-seq data from another study on human embryo development. Therefore, KL-SNE is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and to facilitate subsequent statistical processing. Thus, relative entropy is potentially a better way to determine the divergence between cells in scRNA-seq data analysis.
Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE
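As a minimal illustration of the divergence discussed above, the snippet below computes the Kullback–Leibler divergence (relative entropy) between two cells after normalizing their count vectors into probability distributions, alongside the Euclidean distance, using scipy. The toy counts are assumed, and the pseudocount used to avoid zero probabilities is a common convenience rather than a detail taken from the study.

```python
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import euclidean

# Toy UMI counts for two cells over six genes (assumed values, not data from the study).
cell_a = np.array([120, 0, 35, 80, 5, 10], dtype=float)
cell_b = np.array([90, 4, 50, 60, 0, 25], dtype=float)

def to_distribution(counts, pseudocount=1.0):
    """Add a pseudocount and normalize so the expression profile is a probability vector."""
    counts = counts + pseudocount
    return counts / counts.sum()

p, q = to_distribution(cell_a), to_distribution(cell_b)

kl_pq = entropy(p, q)                              # D_KL(P || Q), asymmetric relative entropy
kl_sym = 0.5 * (entropy(p, q) + entropy(q, p))     # a symmetrized variant, distance-like
print(f"KL(P||Q) = {kl_pq:.4f}, symmetrized KL = {kl_sym:.4f}")
print(f"Euclidean distance on raw counts = {euclidean(cell_a, cell_b):.2f}")
```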
Procedia PDF Downloads 341

2945 Dynamic Determination of Spare Engine Requirements for Air Fighters Integrating Feedback of Operational Information
Authors: Tae Bo Jeon
Abstract:
The Korean Air Force is undertaking a large project to replace hundreds of aging fighters such as the F-4, F-5, and KF-16. The task is to develop and produce domestic fighters, each equipped with two complete engines. A large number of engines, however, will be purchased from a foreign engine maker. In addition to the fighters themselves, securing the proper number of spare engines plays a significant role in maintaining combat readiness and, given their high cost, in managing the national defense budget effectively. In this paper, we present a model that dynamically updates spare engine requirements. Currently, the military administration purchases all fighters, engines, and spare engines at the acquisition stage and has no additional procurement process during the 30-40 year life cycle. Under the assumption that a procurement procedure during the operational stage is established, our model starts from an initial estimate of spare engine requirements based on limited information. The model then performs military missions and repair/maintenance work when necessary. During operation, detailed field information - aircraft repair and test, engine repair, planned maintenance, administration time, the transportation pipeline between base, field, and depot, etc. - should be considered to determine the actual engine requirements. At the end of each year, the performance measure is recorded; the model proceeds to the next year when the measure exceeds the set threshold, and otherwise additional engine(s) are bought and added to the current system. We repeat this process over the life cycle period and compare the results. The proposed model is seen to generate far better results by adding spare engines at appropriate times, thus avoiding undesirable situations. Our model may well be applied to future air force operations.
Keywords: DMSMS, operational availability, METRIC, PRS
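The yearly update loop described above (operate, record a performance measure, buy an engine when the measure falls below a threshold) can be sketched as a simple discrete simulation. Every numeric input below - failure rate, repair turnaround, fleet size, availability target - is an assumed placeholder, and the availability calculation is deliberately simplified; the sketch mirrors only the feedback structure of the model, not its METRIC-style pipeline detail.

```python
import random

random.seed(42)

FLEET_ENGINES = 40           # installed engines (assumed: 20 fighters x 2 engines)
FAILURE_RATE = 0.15          # expected removals per installed engine per year (assumed)
REPAIR_FRACTION_YEAR = 0.25  # average fraction of a year an engine spends in the repair pipeline
TARGET_AVAILABILITY = 0.95   # performance threshold that triggers an extra purchase
LIFE_CYCLE_YEARS = 30

def simulate(initial_spares):
    spares = initial_spares
    purchases = 0
    for year in range(1, LIFE_CYCLE_YEARS + 1):
        # Engines removed for repair this year (one Bernoulli draw per installed engine).
        removals = sum(random.random() < FAILURE_RATE for _ in range(FLEET_ENGINES))
        # Average engine "holes": removals occupy the pipeline for part of the year;
        # available spares fill holes up to the number on hand.
        expected_holes = removals * REPAIR_FRACTION_YEAR
        uncovered = max(0.0, expected_holes - spares)
        availability = 1.0 - uncovered / FLEET_ENGINES
        if availability < TARGET_AVAILABILITY:
            spares += 1          # feedback step: buy one more spare for subsequent years
            purchases += 1
    return spares, purchases

final_spares, bought = simulate(initial_spares=2)
print(f"Spares at end of life cycle: {final_spares} (additional purchases: {bought})")
```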
Procedia PDF Downloads 175

2944 The Roles of Parental Involvement in the Teaching-Learning Process of Students with Special Needs: Perceptions of Special Needs Education Teachers
Authors: Chassel T. Paras, Tryxzy Q. Dela Cruz, Ma. Carmela Lousie V. Goingco, Pauline L. Tolentino, Carmela S. Dizon
Abstract:
In implementing inclusive education, parental involvement is measured to be an irreplaceable contributing factor. Parental involvement is described as an indispensable aspect of the teaching-learning process and has a remarkable effect on the student's academic performance. However, there are still differences in the viewpoints, expectations, and needs of both parents and teachers that are not yet fully conveyed in their relationship; hence, the perceptions of SNED teachers are essential in their collaboration with parents. This qualitative study explored how SNED teachers perceive the roles of parental involvement in the teaching-learning process of students with special needs. To answer this question, one-on-one face-to-face semi-structured interviews with three SNED teachers in a selected public school in Angeles City, Philippines, that offer special needs education services were conducted. The gathered data are then analyzed using Interpretative Phenomenological Analysis (IPA). The results revealed four superordinate themes, which include: (1) roles of parental involvement, (2) parental involvement opportunities, (3) barriers to parental involvement, and (4) parent-teacher collaboration practices. These results indicate that SNED teachers are aware of the roles and importance of parental involvement; however, despite parent-teacher collaboration, there are still barriers that impede parental involvement. Also, SNED teachers acknowledge the big roles of parents as they serve as main figures in the teaching-learning process of their children with special needs. Lastly, these results can be used as input in developing a school-facilitated parenting involvement framework that encompasses the contribution of SNED teachers in planning, developing, and evaluating parental involvement programs, which future researchers can also use in their studiesKeywords: parental involvement, special needs education, teaching-learning process, teachers’ perceptions, special needs education teachers, interpretative phenomenological analysis
Procedia PDF Downloads 116

2943 Diabetes and Medical Plant's Treatment: Ethnobotanical Studies Carried out in Morocco
Authors: Jamila Fakchich, Mostafa Jamila Lazaar Elachouri, Lakhder Fakchich, Fatna Ouali, Abd Errazzak Belkacem
Abstract:
Diabetes is a chronic metabolic disease that has a significant impact on the health, quality of life, and life expectancy of patients, as well as on the health care system. By its nature, diabetes is a multisystem disease with wide-ranging complications that span nearly all regions of the body. This epidemic, however, is not unique to industrialized societies; it has also hit developing countries hard. In Morocco, as a developing country, there is an epidemic rise in diabetes, with ensuing concern about the management and control of the disease. It has become a chronic, burdensome disease of largely middle-aged and elderly people, with a long course and serious complications often resulting in a high death rate, and its treatment consumes vast amounts of resources, including medicines, diets, and physical training. Treatment of this disease is considered problematic due to the lack of effective and safe drugs capable of inducing sustained clinical, biochemical, and histological cure. In Moroccan society, phytoremedies are sometimes the only affordable source of healthcare, particularly for people in remote areas. In this paper, we present a synthesis of the ethnobotanical data reported in different specialized journals: four published ethnobotanical studies carried out in different regions of Morocco by different research teams during the period from 1997 to 2015. The medicinal plants inventoried by the different teams in four Moroccan areas were regrouped and codified; then, Factorial Analysis (FA) and Principal Components Analysis (PCA) were used to analyse the aggregated data from the four studies, and the plants were classified according to their frequency of use by the population. Our work gathers information on traditional uses of medicinal plants from different regions of Morocco and provides a set of medicinal plants commonly used by Moroccan people in the treatment of diabetes; it is intended to provide basic knowledge about the plant species used by Moroccan society for the treatment of diabetes. One of the most interesting aspects of this type of work is to assess the relative cultural importance of medicinal plants for specific illnesses and to explore their usefulness in the context of diabetes.
Keywords: Morocco, medicinal plants, ethnobotanical, diabetes, phytoremedies
Procedia PDF Downloads 334

2942 Geography Undergraduates 360° Academic Peer Learning And Mentoring 2021 – 2023: A Pilot Study
Authors: N. Ayob, N. C. Nkosi, R. P. Burger, S. J. Piketh, F. Letlaila, O. Maphosa
Abstract:
South African higher education institutions have been faced with high dropout rates: about 50 to 60% of first-year students drop out for various reasons, one being inadequate academic support. The Geography 111 (GEOG 111) module is historically known for a pass rate below 50% and a high dropout rate, and it has been identified as a first-year risk module. For the first time, GEOG 111 (2021) on the Mahikeng Campus admitted 150 students pursuing more than six different qualifications (BA and BSc) from the Humanities Faculty and the FNAS. First-year students had difficulty transitioning from secondary to tertiary institutions as teaching shifted to remote learning while navigating the Covid-19 pandemic. The traditional method of teaching does not encourage students to help each other; with remote learning we do not have control over what the students share, and perhaps this can be a learning opportunity to embrace peer learning and change the manner in which we assess the students. The purpose of this pilot study was to assist GEOG 111 students with academic challenges while improving their university experience. This was a qualitative study open to all GEOG 111 students, including repeaters, students who are not confident in their geographical knowledge, and those who never took Geography at high school level. The nine selected Golden Key International Honour Society geography mentors attended an academic mentor training program with the module lecturers. About 17.6% of the mentees did not have a geography background; nevertheless, 94% of the mentees passed, and only one mentee had a mark of 38%. Eleven of the participants had a mark above 60%, with one student exceeding 70%. It is evident that mentorship helped students reach their academic potential. Peer learning and mentoring are associated with improved academic performance and allow students to take charge of their learning and academic experience, and they are thus an important element as we transform pedagogies at higher learning institutions.
Keywords: geography, risk module, peer mentoring, peer learning
Procedia PDF Downloads 158

2941 Tribological Properties of Non-Stick Coatings Used in Bread Baking Process
Authors: Maurice Brogly, Edwige Privas, Rajesh K. Gajendran, Sophie Bistac
Abstract:
Anti-sticky coatings based on perfluoroalkoxy (PFA) coatings are widely used in food processing industry especially for bread making. Their tribological performance, such as low friction coefficient, low surface energy and high heat resistance, make them an appropriate choice for anti-sticky coating application in moulds for food processing industry. This study is dedicated to evidence the transfer of contaminants from the coating due to wear and thermal ageing of the mould. The risk of contamination is induced by the damage of the coating by bread crust during the demoulding stage. The study focuses on the wear resistance and potential transfer of perfluorinated polymer from the anti-sticky coating. Friction between perfluorinated coating and bread crust is modeled by a tribological pin-on-disc test. The cellular nature of the bread crust is modeled by a polymer foam. FTIR analysis of the polymer foam after friction allow the evaluation of the transfer from the perfluorinated coating to polymer foam. Influence of thermal ageing on the physical, chemical and wear properties of the coating are also investigated. FTIR spectroscopic results show that the increase of PFA transfer onto the foam counterface is associated to the decrease of the friction coefficient. Increasing lubrication by film transfer results in the decrease of the friction coefficient. Moreover increasing the friction test parameters conditions (load, speed and sliding distance) also increase the film transfer onto the counterface. Thermal ageing increases the hydrophobic character of the PFA coating and thus also decreases the friction coefficient.Keywords: fluorobased polymer coatings, FTIR spectroscopy, non-stick food moulds, wear and friction
Procedia PDF Downloads 334

2940 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis based on historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require relative to the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images. Patients were grouped with algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more efficiently, and work more effectively; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
Procedia PDF Downloads 78

2939 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning
Authors: Ofira Zloto, Ofir Fogel, Eyal Klang
Abstract:
Purpose: The aim is to develop an automated framework based on machine learning to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before model training, we pre-trained our models on the ISIC 2019 dataset, consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions against their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions: 373 images in the benign category and 186 images in the malignant category. Based on the accuracy values, the model trained for 3 epochs with a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model has a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping. Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises a high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and overall healthcare.
Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques
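The pre-train-then-fine-tune workflow described in the Methods can be outlined as follows. This is a generic torchvision sketch under stated assumptions - a ResNet-18 backbone with generic pretrained weights standing in for the ISIC 2019 pre-training, a hypothetical ImageFolder layout with 'benign'/'malignant' subfolders, 3 epochs, and the reported learning rate of 1e-4 - and it is not the authors' code or architecture.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Hypothetical dataset path with 'benign' and 'malignant' subfolders.
train_ds = datasets.ImageFolder("eyelid_lesions/train", transform=tfms)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)      # benign vs. malignant head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate from the abstract

for epoch in range(3):                             # 3 epochs, as in the best-performing setting
    model.train()
    running_loss, correct, seen = 0.0, 0, 0
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * labels.size(0)
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        seen += labels.size(0)
    print(f"epoch {epoch + 1}: loss={running_loss / seen:.4f}, acc={correct / seen:.3f}")
```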
Procedia PDF Downloads 6

2938 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals are currently moving to cloud computing, and many consider it a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for, and profitability of, cloud computing infrastructure, diverse computing processes can be executed in the cloud, and many organizations and individuals around the world depend on it to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve and optimize it. A good task scheduler should adapt its scheduling technique to the changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, which we call ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). Ant colony optimization is a randomized optimization search method used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. After analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in terms of both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), cloudsim, cloud computing
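A minimal, self-contained version of the pheromone-based assignment loop is sketched below to show how an ACO scheduler can map a task set onto VMs while minimizing makespan. Task lengths, VM speeds, and the ACO parameters (ant count, evaporation rate, heuristic weight) are assumed for illustration; the CloudSim evaluation used in the study is not reproduced, and this is not the ACO-TS implementation itself.

```python
import random

random.seed(7)

# Assumed problem instance: task lengths (million instructions) and VM speeds (MIPS).
tasks = [400, 1200, 700, 300, 900, 1500, 650, 820, 240, 1100]
vm_speed = [500, 1000, 1500]

N_ANTS, N_ITER, ALPHA, BETA, RHO, Q = 20, 50, 1.0, 2.0, 0.4, 100.0
pheromone = [[1.0 for _ in vm_speed] for _ in tasks]

def makespan(assignment):
    """Completion time of the busiest VM under a task -> VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, v in enumerate(assignment):
        load[v] += tasks[t] / vm_speed[v]
    return max(load)

best_assign, best_ms = None, float("inf")
for _ in range(N_ITER):
    solutions = []
    for _ in range(N_ANTS):
        assignment, load = [], [0.0] * len(vm_speed)
        for t in range(len(tasks)):
            # Desirability = pheromone^alpha * heuristic^beta; the heuristic favours the VM
            # that would finish this task earliest given its current load.
            weights = []
            for v in range(len(vm_speed)):
                eta = 1.0 / (load[v] + tasks[t] / vm_speed[v])
                weights.append((pheromone[t][v] ** ALPHA) * (eta ** BETA))
            v = random.choices(range(len(vm_speed)), weights=weights)[0]
            assignment.append(v)
            load[v] += tasks[t] / vm_speed[v]
        solutions.append((makespan(assignment), assignment))
    # Evaporate, then reinforce proportionally to solution quality (shorter makespan = more pheromone).
    for t in range(len(tasks)):
        for v in range(len(vm_speed)):
            pheromone[t][v] *= (1.0 - RHO)
    for ms, assignment in solutions:
        for t, v in enumerate(assignment):
            pheromone[t][v] += Q / ms
    it_best = min(solutions)
    if it_best[0] < best_ms:
        best_ms, best_assign = it_best

print(f"Best makespan found: {best_ms:.2f} s, assignment (task -> VM): {best_assign}")
```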
Procedia PDF Downloads 423

2937 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is often used to describe the land environmental phenomena that reduce land’s original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with “Environmentally Sensitive Areas” (ESA) and the amount of topsoil loss due to erosion. In many studies, it is observed that the assessment of the existing status of land degradation is used to represent the vulnerability. Moreover, it is also noticed that in most studies, the primary emphasis of land degradation vulnerability is to assess its sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to degrade its productive quality permanently or in the long run. It is also important to mention that the vulnerability of land degradation is not a single factor outcome. It is a probability assessment to evaluate the status of land degradation and needs to consider both biophysical and human induce parameters. To avoid the complexity of the previous models in this regard, the present study has emphasized on to generate a simplified model to assess the land degradation vulnerability in terms of its current human population pressure, land use practices, and existing biophysical conditions. It is a “Mixed-Method” termed as the land degradation vulnerability index (LDVi). It was originally inspired by the MEDALUS model (Mediterranean Desertification and Land Use), 1999, and Farazadeh’s 2007 revised version of it. It has followed the guidelines of Space Application Center, Ahmedabad / Indian Space Research Organization for land degradation vulnerability. The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi) by giving equal weightage to each parameter. The final result shows that the very high vulnerable zone primarily indicates three (3) prominent circumstances; land under continuous population pressure, high concentration of human settlement, and high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage further with the help of regression analysis, the LDVi model also provides a strong grasp of each parameter and how far they are competent to trigger the land degradation process.Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
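Since the index is described as an equal-weight combination of six component indices, a compact sketch of that aggregation step is given below. The component values are placeholders on a common 1 (low) to 2 (high) scale, and the arithmetic-mean combination (with a MEDALUS-style geometric mean shown for comparison) is an assumption about the aggregation form, not a transcription of the authors' formula; the class thresholds are likewise assumed.

```python
import math

# Placeholder component indices for one micro-watershed unit, each pre-scaled to 1.0-2.0
# (1 = least degradation-prone, 2 = most). Values are assumed for illustration.
components = {
    "climatic_index_Ci": 1.6,
    "vegetation_index_Vi": 1.4,
    "erosion_index_Ei": 1.8,
    "land_utilization_index_Li": 1.5,
    "population_pressure_index_Pi": 1.7,
    "cover_management_index_CMi": 1.3,
}
values = list(components.values())

# Equal-weight aggregation: every index contributes 1/6 of the final score.
ldvi_arithmetic = sum(values) / len(values)

# MEDALUS-style alternative: geometric mean of the components.
ldvi_geometric = math.prod(values) ** (1.0 / len(values))

print(f"LDVi (arithmetic mean, equal weights): {ldvi_arithmetic:.3f}")
print(f"LDVi (geometric mean, MEDALUS-style):  {ldvi_geometric:.3f}")

# Simple classification of the score into vulnerability classes (thresholds assumed).
for label, upper in [("low", 1.25), ("moderate", 1.5), ("high", 1.75), ("very high", 2.0)]:
    if ldvi_arithmetic <= upper:
        print("Vulnerability class:", label)
        break
```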
Procedia PDF Downloads 168

2936 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities
Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecast that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net-zero carbon emissions in communities in the future requires action in two directions: I) reduction of emissions; and II) removal of ongoing emissions from the atmosphere once decarbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net-zero carbon communities, spanning both emission reductions and removal of ongoing emissions. NBS for the decarbonization of buildings can be achieved by using green roofs and green walls – increasing vertical and horizontal vegetation on the building envelope – and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store a substantial amount of carbon within their structure over the whole service life of the building. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, decarbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction it involves designing new buildings with low-carbon materials as well as with the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges with respect to the decarbonization of buildings using NBS for both building retrofits and new construction. This review documents the effectiveness of NBS in decarbonizing Canadian buildings, identifies the missing links for implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change such as the urban heat island effect. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings
Procedia PDF Downloads 952935 Enhancing Secondary School Mathematics Retention with Blended Learning: Integrating Concepts for Improved Understanding
Authors: Felix Oromena Egara, Moeketsi Mosia
Abstract:
The study aimed to evaluate the impact of blended learning on mathematics retention among secondary school students. Conducted in the Isoko North Local Government Area of Delta State, Nigeria, the research involved 1,235 senior class one (SS 1) students. Employing a non-equivalent control group pre-test-post-test quasi-experimental design, a sample of 70 students was selected from two secondary schools with ICT facilities through purposive sampling. Random allocation of students into experimental and control groups was achieved through balloting within each selected school. The investigation included three assessment points: pre-Mathematics Achievement Test (MAT), post-MAT, and post-post-MAT (retention), administered systematically by the researchers. Data collection utilized the established MAT instrument, which demonstrated a high reliability score of 0.86. Statistical analysis was conducted using the Statistical Package for Social Sciences (SPSS) version 28, with mean and standard deviation addressing study questions and analysis of covariance scrutinizing hypotheses at a significance level of .05. Results revealed significantly greater improvements in mathematics retention scores among students exposed to blended learning compared to those instructed through conventional methods. Moreover, noticeable differences in mean retention scores were observed, with male students in the blended learning group exhibiting notably higher performance. Based on these findings, recommendations were made, advocating for mathematics educators to integrate blended learning, particularly in geometry teaching, to enhance students’ retention of mathematical concepts.Keywords: blended learning, flipped classroom model, secondary school students, station rotation model
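A minimal sketch of the kind of ANCOVA the abstract describes (post-test retention scores by group, adjusted for the pre-test covariate), using hypothetical column names and toy data with the statsmodels library; it illustrates the analysis design, not the authors' SPSS workflow:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical retention data: post-post-MAT scores with pre-MAT as covariate.
df = pd.DataFrame({
    "pre":   [42, 55, 38, 61, 47, 52, 40, 58],
    "post":  [68, 80, 59, 85, 63, 70, 57, 75],
    "group": ["blended", "blended", "blended", "blended",
              "control", "control", "control", "control"],
})

# ANCOVA: retention scores by group, adjusting for pre-test performance.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect at alpha = .05
```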
Procedia PDF Downloads 472934 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low head. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct shaping of the meridional channel and to the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are then calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) approximation of the objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, specific for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically obtain the targeted net head and the maximum total-to-total internal efficiency.Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
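A minimal sketch of the non-dimensional preliminary-design parameters mentioned above (specific speed, flow coefficient, work coefficient) for an axial machine; the numerical inputs are illustrative and do not correspond to the turbine of the study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def design_parameters(Q, H, n_rpm, D_tip, D_hub):
    """Non-dimensional parameters used in classic propeller-turbine sizing."""
    omega = 2.0 * math.pi * n_rpm / 60.0          # shaft speed, rad/s
    ns = omega * math.sqrt(Q) / (G * H) ** 0.75   # non-dimensional specific speed
    U = omega * D_tip / 2.0                       # blade tip speed, m/s
    A = math.pi / 4.0 * (D_tip**2 - D_hub**2)     # annulus area, m^2
    cm = Q / A                                    # meridional velocity, m/s
    phi = cm / U                                  # flow coefficient
    psi = G * H / U**2                            # work coefficient
    return ns, phi, psi

# Example: low-head unit, Q = 2.5 m^3/s, H = 3 m, 600 rpm, 0.6 m tip / 0.25 m hub.
print(design_parameters(Q=2.5, H=3.0, n_rpm=600, D_tip=0.6, D_hub=0.25))
```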
Procedia PDF Downloads 1512933 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns. For a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, we ask how these features can be used to predict an agent's choice to comment on one video rather than another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered as markers for popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391 588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1000 subscribers and more than 4000 hours of viewing time during the last twelve months). In the end, we have a data set of 128 462 videos belonging to 4093 channels. Based on these videos, we have a data set of 1 032 771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.Keywords: YouTube, social networks, economics, consumer behaviour
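A minimal sketch of building the non-weighted commentator-video bipartite network with networkx; the node identifiers and video attributes are hypothetical, and the ERGM estimation itself (typically carried out with specialized statistical software such as the R ergm package) is not shown:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical unique comments: (commentator_id, video_id) pairs.
comments = [("u1", "v1"), ("u2", "v1"), ("u3", "v2"), ("u1", "v3"), ("u4", "v2")]
# Hypothetical video-level covariates used to explain link formation.
video_attrs = {"v1": {"views": 120_000, "likes": 4_500, "duration_s": 610},
               "v2": {"views": 35_000,  "likes": 900,   "duration_s": 305},
               "v3": {"views": 8_000,   "likes": 150,   "duration_s": 95}}

B = nx.Graph()
B.add_nodes_from({u for u, _ in comments}, bipartite="commentator")
B.add_nodes_from(video_attrs.keys(), bipartite="video")
nx.set_node_attributes(B, video_attrs)
B.add_edges_from(comments)  # one unweighted link per unique comment

videos = [n for n, d in B.nodes(data=True) if d["bipartite"] == "video"]
print(bipartite.degrees(B, videos))  # degree views for both node sets
```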
Procedia PDF Downloads 692932 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modelling has constantly been a challenge throughout history. As methodological innovation evolves alongside computer science, researchers are likely to confront larger and more complex water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan’s large-scale water resource model using mathematical programming. The calibration of the water resource model is developed in order to tune unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable and present a rational insight into the system. Subsequently, the optimized parameters were used in a basin-scale linear optimization model with the ability to evaluate the system’s performance against a reduced inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced inflow scenario.Keywords: auto-calibration, Gilan, large-scale water resources, simulation
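A minimal sketch of the auto-calibration idea, fitting unknown return-flow fractions so that a simplified mass balance reproduces gauged outflows; the variable names, network structure, and data are hypothetical and far simpler than the Gilan/Sefidroud model:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical monthly data (hm^3): deliveries to 3 demand sites and gauged outflow.
deliveries = np.array([[30.0, 22.0, 15.0],
                       [28.0, 25.0, 14.0],
                       [35.0, 20.0, 18.0],
                       [32.0, 24.0, 16.0]])
baseflow = np.array([12.0, 10.0, 14.0, 11.0])
observed_outflow = np.array([38.5, 36.2, 43.8, 40.1])

def residuals(return_fractions):
    # Simulated outflow = baseflow + sum of return flows from each demand site.
    simulated = baseflow + deliveries @ return_fractions
    return simulated - observed_outflow

# Calibrate the three unknown return-flow fractions, constrained to [0, 1].
result = least_squares(residuals, x0=[0.3, 0.3, 0.3], bounds=(0.0, 1.0))
print("calibrated return-flow fractions:", result.x)
```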
Procedia PDF Downloads 3352931 The Promotion of a Risk Culture: a Descriptive Study of Ghanaian Banks
Authors: Gerhard Grebe, Johan Marx
Abstract:
The aim of the study is to assess the state of operational risk management and the adoption of an appropriate risk culture in Ghanaian banks. The Bank of Ghana (BoG) joined the Basel Consultative Group (BCG) of the Basel Committee on Banking Supervision (BCBS) in 2021 and is proceeding with the implementation of the Basel III international regulatory framework for banks. The BoG’s Directive on risk management encourages, inter alia, the creation of an appropriate risk culture by Ghanaian banks. However, it is not evident how the risk management staff of Ghanaian banks experience the risk culture and the implementation of operational risk management in the banks where they are employed. Ghana is a developing economy, and it is addressing challenges with its organisational culture. According to Transparency International, successive Ghanaian governments claim to be fighting corruption, but little success has been achieved so far. This points to a possible lack of accountability, transparency, and integrity in the environment in which Ghanaian banks operate, which could influence their risk culture negatively. Purposive sampling was used for the survey, and the questionnaire was completed by Ghanaian bank personnel who specialize in operational risk management, risk governance and compliance, bank supervision, and risk analyses, as well as in the implementation of the operational risk management requirements of the Basel regulatory frameworks. The respondents indicated that they are fostering a risk culture and implementing monitoring and reporting procedures; the three lines of defence (3LOD); compliance; internal auditing; disclosure of operational risk information; and guidance received from the bank supervisor in an attempt to improve their operational risk management practices. However, the respondents reported the following challenges with staff members outside the risk management departments (in order of priority): demonstrating a risk culture; training and development; communication; reporting and disclosure; roles and responsibilities; performance appraisal; and technological and environmental barriers. Recommendations to address these challenges are provided.Keywords: Ghana, operational risk, risk culture, risk management
Procedia PDF Downloads 125