Search results for: hydrological models
760 Development of Coir Reinforced Composite for Automotive Parts Application
Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth
Abstract:
The demand for lightweight and fuel-efficient automobiles has led to the use of fiber-reinforced polymer composites in place of traditional metal parts. Coir, a natural fiber, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices has been a challenge. To address the poor interfacial adhesion with polymeric matrices caused by the fiber's moisture content and method of preparation, the extracted coir was chemically treated using NaOH. To develop a side-view mirror encasement, the mechanical effects of fiber percentage composition, fiber length, and epoxy percentage composition in a coir-fiber-reinforced composite were investigated; polyester was adopted as the resin for the mold, while epoxy was used for the product. Coir served as the filler material for the product. Specimens with varied compositions of fiber loading (15, 30 and 45)%, fiber length (10, 15, 20, 30 and 45) mm, and epoxy resin content (55, 70, 85) wt% were fabricated using the hand lay-up technique and later subjected to mechanical tests (tensile, flexural and impact). The results of the mechanical tests showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and a 45 mm coir length, which was used for the development of a vehicle's side-view mirror encasement. The optimal solutions for the response parameters are 49.333 MPa for tensile strength, 57.118 MPa for flexural strength, 34.787 kJ/m² for impact strength, 4.788 GPa for Young's modulus, 4.534 kN for stress, and 20.483 mm for strain. The models developed using Design Expert software revealed that the input factors can achieve the response parameters in the system with 94% desirability. The study showed that coir is quite durable as a filler material in an epoxy composite for automobile applications and that fiber loading and length have a significant effect on the mechanical behavior of coir fiber-reinforced epoxy composites. Coir's low density, considerable tensile strength, and biodegradability contribute to its eco-friendliness and potential for reducing the environmental hazards of synthetic automotive components.
Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test
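A note on the optimization step above: Design Expert's "desirability" typically implements the Derringer-Suich response-surface approach, in which each response is mapped to an individual desirability and the factor setting maximizing their geometric mean is selected. The following is a minimal, hedged sketch of that scoring step only; the response values and limits are hypothetical placeholders, not the study's fitted models.

```python
import numpy as np

# Minimal sketch of Derringer-Suich desirability scoring, the idea behind the
# "94% desirability" optimum reported above. All response values and limits
# below are hypothetical placeholders, not the study's fitted models.

def desirability_larger_is_better(y, low, target, weight=1.0):
    """Individual desirability for a response that should be maximized."""
    d = (y - low) / (target - low)
    return float(np.clip(d, 0.0, 1.0) ** weight)

def overall_desirability(d_values):
    """Overall desirability D = geometric mean of individual desirabilities."""
    d = np.asarray(d_values, dtype=float)
    return float(d.prod() ** (1.0 / d.size))

# Example: three responses predicted at one candidate factor setting
d_tensile = desirability_larger_is_better(y=49.3, low=20.0, target=50.0)
d_flexural = desirability_larger_is_better(y=57.1, low=30.0, target=60.0)
d_impact = desirability_larger_is_better(y=34.8, low=15.0, target=36.0)

print(overall_desirability([d_tensile, d_flexural, d_impact]))  # overall D in [0, 1]
```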
Procedia PDF Downloads 63
759 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
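As a rough illustration of the regression workflow described above (calibration/prediction split, SVM-based modeling, R² and RMSE reporting), the sketch below uses scikit-learn on synthetic stand-in data; it is not the study's pipeline and omits the UVE/CARS/GA wavelength-selection step.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for NIR spectra (rows = samples, columns = wavelengths)
# and gel strength values; the real study used measured spectra plus
# UVE/CARS/GA variable selection before modeling.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=120)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X_cal, y_cal)

for name, Xs, ys in [("calibration", X_cal, y_cal), ("prediction", X_val, y_val)]:
    pred = model.predict(Xs)
    rmse = mean_squared_error(ys, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(ys, pred):.4f}, RMSE = {rmse:.4f}")
```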
Procedia PDF Downloads 35
758 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources
Authors: Alana E. Jansen
Abstract:
With such a large proportion of organisations relying on the use of team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, there have been conflicting findings as to the effects, particularly when examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies in research findings left open by various other models of team effectiveness and high-performing teams, the present study aims to use the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design where participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, over several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, which combined Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analyses are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 interviews being completed. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources which vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. Findings may assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding the intricacies of why only some teams achieve high-performance status.
Keywords: diversity, high-performing teams, job demands and resources, team effectiveness
Procedia PDF Downloads 186
757 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 108
756 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
Procedia PDF Downloads 107
755 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings that create a meaningful and numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
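Steps (i) and (ii) of the approach above (a k-mer vocabulary plus learned read embeddings) can be sketched with a generic word-embedding model. The toy reads and gensim Word2Vec setup below are illustrative only and are not the authors' metagenome2vec implementation; the genome-identification and multiple-instance-learning steps are omitted.

```python
import numpy as np
from gensim.models import Word2Vec

# Treat each read as a "sentence" of overlapping k-mers, learn k-mer embeddings,
# then represent a read as the mean of its k-mer vectors. Toy reads only.

def kmerize(read, k=4):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = [
    "ACGTACGTGGCTAGCTA",
    "TTGCAACGTACGTGGAA",
    "GGCTAGCTAACGTTTGC",
]

sentences = [kmerize(r) for r in reads]
w2v = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1, epochs=50)

def read_embedding(read, model, k=4):
    vecs = [model.wv[km] for km in kmerize(read, k) if km in model.wv]
    return np.mean(vecs, axis=0)

X = np.stack([read_embedding(r, w2v) for r in reads])  # read-level features
print(X.shape)  # (n_reads, 32); such vectors would feed the MIL phenotype classifier
```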
Procedia PDF Downloads 124
754 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions
Authors: Rhoda N. Kayongo
Abstract:
As behavior analysts in education continue to debate how higher institutions can continue to benefit from their social and academic-related programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in the rates of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, with the complexity of today's student body in a changing world, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed to suit the changing trends. The main purpose of this study was to identify opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales (in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables, and character development outcomes as the dependent variable) was administered to a total of five hundred and one 3rd- and 4th-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. Findings from the analysis showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but in an indirect way. The study concluded that the classroom offers many opportunities for teachers to teach, model and integrate character development among their students. Thus, suggestions are made to public colleges and universities to deliberately boost and implement experiences that cultivate character within the classroom. These may contribute tremendously to students' character development outcomes and hence render effective models of behaviour analysis in higher education.
Keywords: character development, tertiary institutions, predictive model, behavior analysis
Procedia PDF Downloads 134
753 Evaluation of Air Movement, Humidity and Temperature Perceptions with the Occupant Satisfaction in Office Buildings in Hot and Humid Climate Regions by Means of Field Surveys
Authors: Diego S. Caetano, Doreen E. Kalz, Louise L. B. Lomardo, Luiz P. Rosa
Abstract:
The energy consumption of non-residential buildings in Brazil has a great impact on the national infrastructure. The growth in energy consumption is driven in particular by building cooling systems, supported by people's increasing requirements for hygrothermal comfort. This paper presents how occupants of office buildings perceive and evaluate hygrothermal comfort with regard to temperature, humidity, and air movement, considering the cooling systems present in the buildings studied, which are located in hot and humid climate areas. The paper presents results collected over an extended period from 3 office buildings in the cities of Rio de Janeiro and Niteroi (Brazil) in 2015 and 2016, from daily questionnaires with eight questions answered by 114 people over 3 to 5 weeks per building, twice a day (10 a.m. and 3 p.m.). The paper analyses 6 of the 8 questions, emphasizing the perception of temperature, humidity, and air movement. Statistical analyses were made by crossing the participants' answers with high-time-resolution humidity and temperature data. Regressions comparing internal and external temperatures were computed and then compared with the participants' answers. The results were presented in graphs combining temperature and air humidity statistics with the answers of the real occupants. The participants' perceptions of humidity and air movement were also analyzed. The hygrothermal comfort statistical model of the European standard DIN EN 15251 and that of the Brazilian standard NBR 16401 were compared, taking into account the participants' perceptions of hygrothermal comfort, with emphasis on air humidity, building on prior studies published from this same research. The studies point out a relative tolerance for temperatures higher than those determined by the standards, as well as variation in the participants' perception of air humidity. The paper presents a body of detailed information that makes it possible to improve the quality of buildings based on the perceptions of office building occupants, contributing to energy reduction without harming health or the required hygrothermal comfort, and reducing the electricity consumed for cooling.
Keywords: thermal comfort, energy consumption, energy standards, comfort models
Procedia PDF Downloads 321
752 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor, and control the potential risks. Indeed, the goal is to define the major risks in terms of human health, quality of materials, and the environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify hazards in the first stage of design, when hazards and associated risks are not well known. The identified hazards were analyzed to define the potential associated risks, and these were then evaluated using qualitative Event Tree Analysis. ETA is a logical methodology used to define the consequences of a specific hazardous incident, evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as QMRA (quantitative microbial risk assessment) and QCRA (quantitative chemical risk assessment). These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds, such as heavy metals, in the bio-composite materials. Due to these contaminations, the bio-composite product might, during its application, release toxic substances into the environment, leading to a negative environmental impact. Therefore, leaching tests are planned to simulate the application of these materials in the environment and evaluate the potential leaching of inorganic substances, assessing the environmental risk.
Keywords: bio-composite, risk assessment, water reuse, resource recovery
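As a hedged illustration of the QMRA step mentioned above, the sketch below computes per-event and annual infection risks for one exposure route with the standard exponential dose-response model; every parameter value is an illustrative placeholder, not data from this study.

```python
import numpy as np

# QMRA-style sketch: exponential dose-response model P_inf = 1 - exp(-r * dose),
# combined over repeated exposure events in a year. Placeholder values only.

r = 0.005                      # pathogen-specific dose-response parameter
concentration = 2.0            # organisms per gram of bio-composite material
intake_per_event = 0.05        # grams contacted/ingested per exposure event
events_per_year = 50

dose_per_event = concentration * intake_per_event
p_event = 1.0 - np.exp(-r * dose_per_event)          # per-event infection probability
p_annual = 1.0 - (1.0 - p_event) ** events_per_year  # risk over repeated events

print(f"per-event risk: {p_event:.2e}, annual risk: {p_annual:.2e}")
```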
Procedia PDF Downloads 107
751 Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used more and more often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions, was developed. The main difference between standard and spatial vector autoregressions is that in the spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the spatial VAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables. The lags of GRP, CPI and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a "naïve" model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment in time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, the GRP lags in neighboring regions also become significant at the 5% significance level. For both the spatial and "naïve" VARs, the RMSEs were calculated. The minimum RMSEs are obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
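The second specification (spatially lagged dependent variables at t-1) can be illustrated with a minimal sketch: build a row-normalized adjacency weight matrix, form the spatial lags, and estimate each region's equation by OLS. The data, adjacency structure and coefficients below are synthetic placeholders, not the Russian regional dataset analyzed in the study.

```python
import numpy as np
import statsmodels.api as sm

# Minimal SpVAR-style sketch: each region's indicator at time t is regressed on
# its own lag and on the spatial lag (adjacency-weighted average of neighbors)
# at t-1. All data and the adjacency matrix are synthetic.

rng = np.random.default_rng(1)
T, N = 60, 5                                  # time periods, regions
W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)          # row-normalize the binary adjacency weights

y = np.zeros((T, N))
y[0] = rng.normal(size=N)
for t in range(1, T):                         # simulate own-lag and spatial-lag effects
    y[t] = 0.5 * y[t - 1] + 0.3 * (W @ y[t - 1]) + rng.normal(scale=0.5, size=N)

own_lag = y[:-1]                              # y_{t-1}
spatial_lag = y[:-1] @ W.T                    # W y_{t-1}
target = y[1:]

for i in range(N):                            # equation-by-equation OLS
    X = sm.add_constant(np.column_stack([own_lag[:, i], spatial_lag[:, i]]))
    res = sm.OLS(target[:, i], X).fit()
    print(f"region {i}: own-lag = {res.params[1]:.2f}, spatial-lag = {res.params[2]:.2f}")
```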
Procedia PDF Downloads 141
750 Walking the Talk? Thinking and Acting – Teachers' and Practitioners' Perceptions about Physical Activity, Health and Well-Being, Do They 'Walk the Talk'?
Authors: Kristy Howells, Catherine Meehan
Abstract:
This position paper presents current research findings on the proposed gap between teachers' and practitioners' thinking and acting about physical activity, health and well-being in childhood. Within the new Primary curriculum, there is a focus on sustained physical activity within Physical Education and on healthy lifestyles in Personal, Health, Social and Emotional lessons, but there is no curriculum guidance about what sustained physical activity is and how it is defined. The current health guidance for birth to five suggests that children should not be inactive for long periods and specifies light and energetic activities; there is a suggested period of time per day for young children to achieve, but the guidance does not specify how this should be measured. The challenge for teachers and practitioners is therefore their own confidence and understanding of what "good / moderate intensity" physical activity and healthy living look like for children, and the children's understanding of what they are doing. There is limited research on children from birth to eight years, and on the perceptions and attitudes of those who work with this age group; however, it has been found that children can at times identify different levels of activity, and that they can identify healthy foods and good choices for healthy living at a basic level. Authors have also explored teachers' beliefs about teaching and learning and found that teachers could act in accordance with their beliefs about their subject area only when their subject knowledge, understanding and confidence in that area were high. The confidence and competence of practitioners and teachers to integrate 'well-being' within learning settings has been reported as being low. This may be because their subject knowledge is not high. It has been suggested that children's life chances are improved by focusing on well-being in their earliest years. This includes working with parents and families, and being aware of the environmental contexts that may impact on children's well-being. The key is for practitioners and teachers to know how to implement these ideas effectively, as these key workers have a profound effect on young children, both as role models and because of the proportion of children's waking hours spent with them. This position paper is part of a longitudinal study at Canterbury Christ Church University; here we share the research findings from the initial questionnaire (online, postal, and in person) that explored and evaluated the knowledge, competence and confidence levels of practitioners and teachers regarding the structure and planning of sustained physical activity and healthy lifestyles, and how this progresses with the children's age.
Keywords: health, perceptions, physical activity, well-being
Procedia PDF Downloads 401
749 Professional Development in EFL Classroom: Motivation and Reflection
Authors: Iman Jabbar
Abstract:
Within the scope of professionalism and in order to compete in the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In our field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, he should endeavor to enhance the students' skills, such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his competence and performance. Reflective practice, which is an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students' learning, as it affects the quality of learning English in the classroom in addition to determining success or failure as well as language achievement. This is a qualitative study grounded in the interpretive perspectives of teachers and students regarding the process of professional development. The study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies that they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers' professional context? The study is organized into two parts, theoretical and practical. The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development by providing certain definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study since it deals with two issues. It involves the research design, methodology, and methods of data collection, sampling, and data analysis. It ends with an overall discussion of the findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.
Keywords: professional development, motivation, reflection, learning
Procedia PDF Downloads 450
748 Preventive Effect of Locoregional Analgesia Techniques on Chronic Post-Surgical Neuropathic Pain: A Prospective Randomized Study
Authors: Beloulou Mohamed Lamine, Bouhouf Attef, Meliani Walid, Sellami Dalila, Lamara Abdelhak
Abstract:
Introduction: Post-surgical chronic pain (PSCP) is a pathological condition with a rather complex etiopathogenesis that extensively involves sensitization processes and neuronal damage. The neuropathic component of these pains is almost always present, with variable expression depending on the type of surgery. Objective: To assess the presumed beneficial effect of Regional Anesthesia-Analgesia Techniques (RAAT) on the development of post-surgical chronic neuropathic pain (PSCNP) in various surgical procedures. Patients and Methods: A comparative study involving 510 patients distributed across five surgical models (mastectomy, thoracotomy, hernioplasty, cholecystectomy, and major abdominal-pelvic surgery) and randomized into two groups: Group A (240) receiving conventional postoperative analgesia and Group B (270) receiving balanced analgesia, including the implementation of a Regional Anesthesia-Analgesia Technique (RAAT). These patients were longitudinally followed over a 6-month period, with post-surgical chronic neuropathic pain (PSCNP) defined by a Neuropathic Pain Score DN2 ≥ 3. Comparative measurements through univariate and multivariate analyses were performed to identify associations between the development of PSCNP and certain predictive factors, including the presumed preventive impact (protective effect) of RAAT. Results: At the 6th month post-surgery, 419 patients were analyzed (Group A = 196 and Group B = 223). The incidence of PSCP was 32.2% (n = 135). Among these patients with chronic pain, the prevalence of neuropathic pain was 37.8% (95% CI: [29.6; 46.5]), with n = 51/135. It was significantly lower in Group B compared to Group A, with respective percentages of 31.4% vs. 48.8% (p-value = 0.035). The most significant differences were observed in breast and thoracopulmonary surgeries. In a multiple regression analysis, two predictors of PSCNP were identified: the presence of preoperative pain at the surgical site as a risk factor (OR: 3.198; 95% CI [1.326; 7.714]) and RAAT as a protective factor (OR: 0.408; 95% CI [0.173; 0.961]). Conclusion: The neuropathic component of PSCNP can be observed in different types of surgeries. Regional analgesia included in a multimodal approach to postoperative pain management has proven to be effective for acute pain and seems to have a preventive impact on the development of PSCNP and its neuropathic nature or component, particularly in surgeries that are more prone to chronicization.
Keywords: chronic postsurgical pain, postsurgical chronic neuropathic pain, regional anesthesia and analgesia techniques (RAAT), neuropathic pain score DN2, preventive impact
Procedia PDF Downloads 26
747 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of, or unintended changes in, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness, as well as analysing the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in Wisetex for varying architectures of binder style, pick density, thickness and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared with the Wisetex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation but the highest overall stability under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with the experimental results; however, deviation is evident due to assumptions present within the models.
Keywords: 3D woven composites, compression, preforms, textile composites
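The reason compression matters for consolidation can be made concrete with the standard areal-weight relation Vf = n·Aw / (ρf·t): for a fixed preform, reducing the moulded thickness t raises the fibre volume fraction. The numbers in the sketch below are illustrative placeholders, not measurements from this study.

```python
# Fibre volume fraction of a compressed preform from areal weight, layer count,
# fibre density and moulded thickness: Vf = (n * Aw) / (rho_f * t).
# All values are illustrative placeholders.

def fibre_volume_fraction(n_layers, areal_weight_g_m2, fibre_density_g_cm3, thickness_mm):
    areal_weight_g_cm2 = areal_weight_g_m2 / 1e4   # g/m^2 -> g/cm^2
    thickness_cm = thickness_mm / 10.0
    return (n_layers * areal_weight_g_cm2) / (fibre_density_g_cm3 * thickness_cm)

# Compressing the same 3D woven preform into a smaller cavity raises Vf:
for t in (6.0, 5.0, 4.0):                          # mould cavity thickness, mm
    vf = fibre_volume_fraction(n_layers=1, areal_weight_g_m2=6000,
                               fibre_density_g_cm3=2.55, thickness_mm=t)
    print(f"thickness {t} mm -> Vf = {vf:.2f}")
```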
Procedia PDF Downloads 133
746 Life Satisfaction of Non-Luxembourgish and Native Luxembourgish Postgraduate Students
Authors: Chrysoula Karathanasi, Senad Karavdic, Angela Odero, Michèle Baumann
Abstract:
It is not only economic determinants that have an impact on life conditions; maintaining a good level of life satisfaction (LS) may also be an important challenge at present. In Luxembourg, university students receive financial aid from the government. They are then registered at the Centre for Documentation and Information on Higher Education (CEDIES). Luxembourg is built on migration, with almost half its population consisting of foreigners. It is upon this basis that our research aims to analyze the associations of mental health factors (health satisfaction, psychological quality of life, worry), perceived financial situation, and career attitudes (adaptability, optimism, knowledge, planning) with LS, for non-Luxembourgish and native postgraduate students. Between 2012 and 2013, postgraduates registered at CEDIES were contacted by post and asked to participate in an online survey, with the option of either English or French. The study population comprised 644 respondents. Our statistical analysis excluded those born abroad who had Luxembourgish citizenship and those born in Luxembourg who did not have citizenship. Two groups were formed, one consisting of 147 non-Luxembourgish students and the other of 284 natives. A single item measured LS (1 = not at all satisfied to 10 = very satisfied). Bivariate tests, correlations and multiple linear regression models were used, in which only significant relationships (p < 0.05) were integrated. Between the two groups, no differences were found in the LS indicators (7.8/10 non-Luxembourgish; 8.0/10 natives), and both were higher than the European indicator of 7.2/10 (for 25-34 years). The non-Luxembourgish students were older than the natives (29.3 years vs. 26.3 years), perceived their financial situation as more difficult, and a higher percentage of their parents had an education level higher than a Bachelor's degree (father 59.2% vs. 44.6% for natives; mother 51.4% vs. 33.7% for natives). In addition, the father's education was related to the LS of these postgraduates: the higher the score, the greater the contribution to LS. For native students, the higher their scores for health satisfaction and career optimism, the higher their LS score. For both groups, LS was linked to mental health-related factors, perception of their financial situation, career optimism, adaptability and planning. The higher the psychological quality of life score, the greater the postgraduates' LS. Good health and positive attitudes related to the job market enhanced their LS indicator.
Keywords: career attributes, father's education level, life satisfaction, mental health
Procedia PDF Downloads 370
745 Network Based Speed Synchronization Control for Multi-Motor via Consensus Theory
Authors: Liqin Zhang, Liang Yan
Abstract:
This paper addresses the speed synchronization control problem for a network-based multi-motor system from the perspective of cluster consensus theory. Each motor is considered as a single agent connected through a fixed and undirected network. This paper presents an improved control protocol from three aspects. First, for the purpose of improving both tracking and synchronization performance, this paper presents a distributed leader-following method. The improved control protocol takes the importance of each motor's speed into consideration, and all motors are divided into different groups according to speed weights. Specifically, by optimizing the control parameters, the synchronization error and tracking error can be regulated and decoupled to some extent. The simulation results demonstrate the effectiveness and superiority of the proposed strategy. In practical engineering, simplified models such as the single-integrator and double-integrator are unrealistic, and previous algorithms require the leader's acceleration information to be available to all followers if the leader has a varying velocity, which is also difficult to realize. Therefore, the method focuses on an observer-based variable structure algorithm for consensus tracking, which dispenses with the leader's acceleration. The presented scheme optimizes synchronization performance and provides satisfactory robustness. Furthermore, while existing algorithms can obtain a stable synchronous system, the obtained stable system may encounter disturbances that destroy the synchronization. Focusing on this challenging technological problem, a state-dependent-switching approach is introduced. In the presence of unmeasured angular speed and unknown failures, this paper investigates a distributed fault-tolerant consensus tracking algorithm for a group of non-identical motors. The failures are modeled by nonlinear functions, and a sliding mode observer is designed to estimate the angular speed and the nonlinear failures. The convergence and stability of the given multi-motor system are proved. Simulation results have shown that all followers asymptotically converge to a consistent state when one follower fails to follow the virtual leader during a large enough disturbance, which illustrates the good synchronization accuracy of the control scheme.
Keywords: consensus control, distributed follow, fault-tolerant control, multi-motor system, speed synchronization
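To make the leader-following idea concrete, the sketch below simulates speed consensus over a fixed undirected network with one 'pinned' motor receiving the leader's reference speed. It assumes simple first-order agent dynamics and illustrative gains; it is not the observer-based, fault-tolerant or state-dependent-switching controller developed in the paper.

```python
import numpy as np

# Leader-following speed consensus on a fixed undirected graph: each motor
# steers its speed toward its neighbors and (if pinned) toward the leader.
# First-order dynamics and gains are illustrative placeholders.

A = np.array([[0, 1, 0, 1],                  # adjacency matrix of the follower network
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, 0.0])           # only motor 0 receives the leader's speed
k, dt, steps = 2.0, 0.01, 2000

omega = np.array([5.0, 20.0, -3.0, 12.0])    # initial motor speeds (rad/s)
omega_leader = 50.0                          # reference speed

for _ in range(steps):
    u = np.zeros_like(omega)
    for i in range(len(omega)):
        neighbor_term = A[i] @ (omega - omega[i])        # pull toward neighbors
        leader_term = b[i] * (omega_leader - omega[i])   # pull toward leader if pinned
        u[i] = k * (neighbor_term + leader_term)
    omega = omega + dt * u                               # integrate d(omega)/dt = u

print(np.round(omega, 2))  # speeds converge close to the 50 rad/s reference
```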
Procedia PDF Downloads 123
744 The Acquisition of /r/ By Setswana-Learning Children
Authors: Keneilwe Matlhaku
Abstract:
Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulation in the acquisition of the rhotics. This article provides a detailed analysis of the early development of the rhotic phoneme, an apical trill /r/, by monolingual Setswana (Tswana S30) children in the age range of 1 to 4 years. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution with dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months with no or only minimal disruptions in their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children's data. Phon also enabled the generation of the children's phonological inventories for comparison with adult target IPA forms. We explain the children's patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We show how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in the production of these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges the child faces in learning these auditory-articulatory mappings. We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, in which we show that the variability and volatility in the substitution patterns of /r/ are also brought about by the properties of the children's ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana.
Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping
Procedia PDF Downloads 37
743 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is a term often used to describe environmental phenomena that reduce the land's original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with "Environmentally Sensitive Areas" (ESA) and the amount of topsoil loss due to erosion. In many studies, it is observed that the assessment of the existing status of land degradation is used to represent the vulnerability. Moreover, it is also noticeable that in most studies the primary emphasis of land degradation vulnerability is on assessing sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to degrade in its productive quality permanently, or in the long run. It is also important to mention that land degradation vulnerability is not a single-factor outcome. It is a probability assessment of the status of land degradation and needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study emphasizes generating a simplified model to assess land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a mixed method, termed the land degradation vulnerability index (LDVi). It was originally inspired by the MEDALUS (Mediterranean Desertification and Land Use) model of 1999 and Farazadeh's 2007 revised version of it. It follows the guidelines of the Space Application Center, Ahmedabad / Indian Space Research Organization for land degradation vulnerability. The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi) by giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily reflects three (3) prominent circumstances: land under continuous population pressure, a high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage and further examined with the help of regression analysis, the LDVi model also provides a strong grasp of each parameter and of how far each is capable of triggering the land degradation process.
Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
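The equal-weight integration of the six sub-indices can be written in a few lines. The abstract does not state whether the indices are combined with an arithmetic or a geometric (MEDALUS-style) mean, so the hedged sketch below shows both; the index values are illustrative placeholders.

```python
import numpy as np

# Combining the six sub-indices (Ci, Vi, Ei, Li, Pi, CMi) into LDVi with equal
# weighting. Values are placeholders on a common 1 (low) to 2 (high) scale.

indices = {"Ci": 1.6, "Vi": 1.4, "Ei": 1.8, "Li": 1.5, "Pi": 1.9, "CMi": 1.3}
values = np.array(list(indices.values()))

ldvi_arithmetic = values.mean()                          # simple equal-weight average
ldvi_geometric = values.prod() ** (1.0 / values.size)    # MEDALUS-style aggregation

print(f"arithmetic-mean LDVi: {ldvi_arithmetic:.3f}")
print(f"geometric-mean LDVi:  {ldvi_geometric:.3f}")
```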
Procedia PDF Downloads 165
742 Sources and Content of Sexual Information among School Going Adolescents in Uganda
Authors: Jonathan Magala
Abstract:
Context: Adolescents in Uganda face significant challenges related to sexual health due to inadequate sexual information. This lack of information puts young people at risk of early pregnancies, sexually transmitted infections, and poverty. Therefore, it is essential to understand the sources, content, and challenges of acquiring sexual information among secondary school-going adolescents in Uganda. Research Aim: The aim of this study was to establish the sources, content, and challenges of acquiring sexual information among secondary school-going adolescents in Luwero Town Council, Uganda. Methodology: This study used a cross-sectional approach with both qualitative and quantitative methods. Questionnaires and in-depth interviews were conducted with 384 school-going adolescents aged between 13 and 19 years in Luwero Town Council, Uganda. Findings: The results of the study revealed that adolescents receive sexual information from various sources, with schools being the most common source, followed by parents, and with religious institutions being the least utilized. Adolescents received information on various topics related to sexuality, including puberty and sexual changes, pregnancy and reproduction, STD information, abstinence, and family planning. However, the content of the sexual information was inadequate in addressing the challenges facing adolescents, and acquiring it was hindered by generation gaps, a lack of role models, peer influence, and government policies. Across all sources, male figures were the least likely to offer sexual information to adolescents. Theoretical Importance: The study's findings highlight the need for policy implementation to strengthen sexual education in the school curriculum, as the sources of sexual information and their content are inadequate. The various topics should be addressed in schools to provide comprehensive education on sexual health for adolescents. Data Collection and Analysis Procedures: Data collection involved questionnaires and in-depth interviews with school-going adolescents. The data gathered were analyzed using descriptive statistics and thematic analysis. Questions Addressed: The study aimed to answer questions about the sources of sexual information among school-going adolescents, the content of the sexual information provided, the challenges faced in accessing the information, and the importance of sex education policy implementation. Conclusion: The study concludes that schools are a popular source of sexual information among school-going adolescents in Uganda. However, the content of the information provided is inadequate in addressing the challenges that adolescents face regarding their sexual health. Therefore, policy implementation is essential to strengthen sexual education in the school curriculum and address the various topics related to sexual health.
Keywords: adolescents, sexual information, schools, reproductive health
Procedia PDF Downloads 74
741 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emerging medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks that integrate the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ) (1), where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad hoc terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and makes it possible to obtain improved results, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
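A minimal sketch of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) follows, assuming a plain linear operator as a stand-in for the PDE-based forward model and a Tikhonov (L2) penalty for R(q); the decoder-generated subspace prior and the learned regularizer described above are not reproduced.

```python
import torch

# Solve q_hat = argmin_q  ||F q - y||^2 + alpha * ||q||^2 by gradient descent.
# F is a fixed random linear operator standing in for the PDE-based DOT forward
# model; alpha and all sizes are illustrative placeholders.

torch.manual_seed(0)
n_meas, n_vox = 40, 100
F = torch.randn(n_meas, n_vox) / n_vox ** 0.5
q_true = torch.zeros(n_vox)
q_true[30:40] = 1.0                               # an "inclusion" in the coefficients
y = F @ q_true + 0.01 * torch.randn(n_meas)       # noisy measurements

alpha = 1e-2
q = torch.zeros(n_vox, requires_grad=True)
opt = torch.optim.Adam([q], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    data_fidelity = torch.sum((F @ q - y) ** 2)   # D(y, y_tilde)
    regularizer = torch.sum(q ** 2)               # R(q), Tikhonov penalty
    loss = data_fidelity + alpha * regularizer
    loss.backward()
    opt.step()

print(float(torch.norm(q.detach() - q_true) / torch.norm(q_true)))  # relative error
```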
Procedia PDF Downloads 74
740 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study
Authors: Thomas Arink, Isam Janajreh
Abstract:
The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), causing its calorific value to be too low for gasification. Since the gasification process occurs at 900ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model and 3. experimental work on a drop-tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis) and calorific value measurements (bomb calorimeter) to obtain the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop-tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires
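The zero-dimensional equilibrium step can be illustrated by minimizing the total Gibbs energy of a small gas-phase species set subject to elemental balance, here with a constrained optimizer rather than explicit Lagrange multipliers. The species set, standard-state Gibbs values and feed composition are rough placeholders, not the paper's model data.

```python
import numpy as np
from scipy.optimize import minimize

# Gibbs energy minimization sketch: find mole numbers n_i minimizing the total
# Gibbs energy subject to C, H, O elemental balance. Placeholder data only.

R, T, P = 8.314, 1173.0, 1.0                           # J/(mol K), K, bar
species = ["CO", "CO2", "H2", "H2O", "CH4"]
g0 = np.array([-200e3, -396e3, 0.0, -190e3, -20e3])    # placeholder mu0(T), J/mol

E = np.array([[1, 1, 0, 0, 1],                         # C atoms per molecule
              [0, 0, 2, 2, 4],                         # H
              [1, 2, 0, 1, 0]])                        # O
feed = np.array([1.0, 2.4, 1.2])                       # total moles of C, H, O fed

def total_gibbs(n):
    n = np.maximum(n, 1e-10)
    mu = g0 + R * T * np.log(n / n.sum() * P)          # ideal-gas chemical potentials
    return float(n @ mu)

cons = {"type": "eq", "fun": lambda n: E @ n - feed}
res = minimize(total_gibbs, np.full(len(species), 0.5),
               bounds=[(1e-10, None)] * len(species),
               constraints=[cons], method="SLSQP")

for s, n in zip(species, res.x):
    print(f"{s}: {n:.3f} mol")
```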
Procedia PDF Downloads 519739 725 Arcadia Street in Pretoria: A Pretoria Case Study Focusing on Urban Acupuncture
Authors: Konrad Steyn, Jacques Laubscher
Abstract:
South African urban design solutions are mostly aligned with European and North American models that are often not appropriate for addressing some of this country’s challenges, such as multiculturalism and decaying urban areas. Sustainable urban redevelopment in South Africa should be comprehensive in nature, sensitive in its manifestation, and robust and inclusive in order to achieve social relevance. This paper argues that the success of an urban design intervention is largely dependent on the public’s perceptions and expectations, and the way people participate in shaping their environments. The concept of sustainable urbanism is thus more comprehensive than – yet should undoubtedly include – methods of construction, material usage and climate control principles. The case study is a central element of this research paper. 725 Arcadia Street in Pretoria was originally commissioned as a food market structure. A starkly contrasting existing modernist adjacent building forms the morphological background. Built in 1969, it is a valuable part of Pretoria’s modernist fabric. It was realised early on that the project should not be a mere localised architectural intervention, but rather an occasion to revitalise the neighbourhood through urban regeneration. Because of the complex and comprehensive nature of the site and the rich cultural diversity of the area, a multi-faceted approach seemed the most appropriate response. The methodology for collating data consisted of a combination of literature reviews (regarding the historic original fauna and flora and current plants), observation (frequent site visits) and physical surveying at the neighbourhood level (physical location, connectivity to surrounding landmarks, as well as movement systems and pedestrian flows). This was followed by an exploratory design phase, culminating in the present redevelopment proposal. Since built environment interventions are increasingly based on generalised normative guidelines, an approach focusing on urban acupuncture could serve as an alternative. Celebrating the specific urban condition, urban acupuncture offers an opportunity to influence the surrounding urban fabric and achieve urban renewal through physical, social and cultural mediation.Keywords: neighbourhood, urban renewal, South African urban design solutions, sustainable urban redevelopment
Procedia PDF Downloads 494738 Investigation on Behaviour of Reinforced Concrete Beam-Column Joints Retrofitted with CFRP
Authors: Ehsan Mohseni
Abstract:
The aim of this thesis is to provide numerical analyses of reinforced concrete beam-column joints with/without CFRP (Carbon Fiber Reinforced Polymer) in order to achieve a better understanding of the behaviour of strengthened beam-column joints. A comprehensive literature survey prior to this study revealed that published studies are limited to a handful only; the results are inconclusive and some are even contradictory. Therefore, in order to improve on this situation, a numerical study was designed and performed, as presented in this thesis. For the numerical study, the dimensions, end supports, and characteristics of the beam and column models were the same as those chosen in a previous experimental investigation in which ten beam-column joints were tested to failure. Finite element analysis is a useful tool in cases where analytical methods are not capable of solving the problem due to the complexities associated with it. The cyclic behaviour of FRP-strengthened reinforced concrete beam-column joints is such a case. The interaction of steel (longitudinal bars and stirrups), concrete and FRP, yielding of steel bars and stirrups, cracking of concrete, the redistribution of stresses as some elements unload due to crushing or yielding, and the confinement of concrete due to the presence of FRP are some of the issues that introduce complexity into the problem. Numerical solutions, however, can provide further information about the behaviour in lieu of costly experiments or complex closed-form solutions. This thesis presents the results of a numerical study on beam-column joints subjected to cyclic loads that are strengthened with CFRP wraps or strips in a variety of configurations. The analyses are performed with the Abaqus finite element program and are calibrated against the experiments. A range of issues in beam-column joints, including the cracking load, the ultimate load and the lateral load-displacement curves of the joints, is investigated. The numerical results for different strengthening configurations are compared. Finally, the computed numerical results are compared with those obtained from experiments. The cracking load, the ultimate load and the lateral load-displacement curves obtained from the numerical analysis for all joints were in very good agreement with the corresponding experimental ones. The results obtained from the numerical analysis in most cases imply that this method is conservative and can therefore be used in design applications with confidence.Keywords: numerical analysis, strengthening, CFRP, reinforced concrete joints
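As a hedged illustration of how the reported agreement between numerical and experimental results might be quantified, the sketch below interpolates two lateral load-displacement envelopes onto common displacement points and reports a normalized RMSE; both curves are invented placeholders, not the thesis data.

```python
# Compare a numerical and an experimental load-displacement envelope on a common grid.
import numpy as np

disp_exp = np.array([0, 5, 10, 15, 20, 25, 30])           # mm (assumed experimental points)
load_exp = np.array([0, 22, 38, 47, 52, 54, 53])          # kN
disp_num = np.array([0, 4, 8, 12, 16, 20, 24, 28, 32])    # mm (assumed FE points)
load_num = np.array([0, 19, 33, 42, 48, 51, 53, 54, 53])  # kN

common = np.linspace(0, 28, 50)                            # shared displacement range
exp_i = np.interp(common, disp_exp, load_exp)
num_i = np.interp(common, disp_num, load_num)

rmse = np.sqrt(np.mean((num_i - exp_i) ** 2))
print(f"NRMSE vs peak experimental load: {rmse / load_exp.max():.1%}")
```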
Procedia PDF Downloads 347737 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria
Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene
Abstract:
As the drive towards clean, renewable and sustainable energy generation is gradually reshaped by growing renewable penetration, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century, relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. In doing so, we used the scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity. The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through the expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analysis in the current survey will help to determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.Keywords: capacity, energy, power system, storage
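A minimal sketch, under assumed data, of how a capacity rating for duration-limited storage can be estimated: the share of the storage power rating that reliably shaves the peak of a load profile, found by bisection on the residual peak. The hourly load shape and the 10 MW / 4-6 h storage sizes are illustrative, not the Nigerian load and resource data used in the study.

```python
# Capacity rating of duration-limited storage as the fraction of its power rating
# that can be counted on to reduce the system peak of a given load profile.
import numpy as np

load = np.array([60, 55, 52, 50, 52, 58, 70, 82, 90, 95, 97, 98,
                 99, 100, 98, 96, 95, 97, 99, 96, 90, 82, 72, 65], dtype=float)  # MW, one day (assumed)

def capacity_rating(load, power_mw, duration_h):
    """Bisect on the lowest shaved peak reachable with the given power and energy limits."""
    energy = power_mw * duration_h
    lo, hi = load.max() - power_mw, load.max()
    for _ in range(60):
        target = 0.5 * (lo + hi)
        needed = np.clip(load - target, 0, power_mw).sum()   # MWh needed to shave down to `target`
        if needed <= energy:
            hi = target        # feasible, try shaving further
        else:
            lo = target        # infeasible, relax the target
    return (load.max() - hi) / power_mw

for dur in (4, 6):
    print(f"{dur} h storage, 10 MW:", round(capacity_rating(load, 10.0, dur), 2))
```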
Procedia PDF Downloads 33736 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia
Authors: Agus Hariyadi, Hiroatsu Fukuda
Abstract:
In a tropical country like Indonesia, especially in Jakarta, most building energy consumption goes to the cooling system, followed by electric lighting. One of the passive design strategies that can be applied is optimizing the use of natural light from the sun. In this area, natural light is available almost every day throughout the year. Natural light has several effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another aspect that has to be considered in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the façade design. An external shading device can minimize the external load introduced into the room, especially from direct solar radiation, which accounts for 80% of the external energy load introduced into the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone of the room. One horizontal blind that can be used for this purpose is the Sudare, a traditional Japanese blind that has long been used in Japanese traditional houses, especially in summer. In its original function, Sudare is used to prevent direct solar radiation while still allowing natural ventilation. It has physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of Sudare will be simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling will be done with the Ladybug and Honeybee plugins, two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency between the geometry used in EnergyPlus and DAYSIM. The aim of this research is to find the best façade design configuration, one which reduces the external load from outside the building to minimize the energy needed for the cooling system, while maintaining the natural light distribution inside the room to maximize visual comfort for occupants and minimize electrical energy consumption.Keywords: façade, natural light, blind, energy
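A minimal sketch of the kind of annual daylight metric such simulations report, here daylight autonomy (the share of occupied hours with workplane illuminance above a target). The synthetic illuminance series, 300 lux target and 08:00-18:00 schedule are assumptions for illustration, not DAYSIM output for the Sudare configurations.

```python
# Daylight autonomy from an hourly illuminance series over one year.
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(8760)
hour_of_day = hours % 24
# crude daily illuminance shape peaking at noon, scaled by random sky variability
illuminance = np.clip(900 * np.sin(np.pi * (hour_of_day - 6) / 12), 0, None) * rng.uniform(0.3, 1.0, 8760)

occupied = (hour_of_day >= 8) & (hour_of_day < 18)            # assumed occupancy schedule
daylight_autonomy = np.mean(illuminance[occupied] >= 300.0)   # assumed 300 lux target
print(f"daylight autonomy (>=300 lux, 08-18h): {daylight_autonomy:.1%}")
```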
Procedia PDF Downloads 344735 The Impact of Climate Change on Typical Material Degradation Criteria over Timurid Historical Heritage
Authors: Hamed Hedayatnia, Nathan Van Den Bossche
Abstract:
Understanding the ways in which climate change accelerates or slows down the process of material deterioration is the first step towards assessing adaptive approaches for the conservation of historical heritage. Analysis of the effects of climate change on degradation risk assessment parameters, such as freeze-thaw cycles and wind erosion, is also key when considering mitigating actions. Given the vulnerability of cultural heritage to climate change, the impact of this phenomenon on material degradation criteria, with a focus on brick masonry walls in Timurid heritage located in Iran, was studied. The Timurids were the final great dynasty to emerge from the Central Asian steppe. Through their patronage, the eastern Islamic world, especially Mashhad and Herat, became a prominent cultural center. Goharshad Mosque is a mosque in Mashhad, in Razavi Khorasan Province, Iran. It was built by order of Empress Goharshad, the wife of Shah Rukh of the Timurid dynasty, in 1418 CE. Choosing an appropriate regional climate model was the first step. The outputs of two different climate models, ALARO-0 and REMO, were analyzed to find out which model is better adapted to the area. To validate the quality of the models, a comparison between model data and observations was carried out in four different climate zones in Iran for a period of 30 years. The impacts of the projected climate change were evaluated up to 2100. To determine the material specification of Timurid bricks, standard brick samples from a Timurid mosque were studied. Determination of the water absorption coefficient, the diffusion properties, the real density and the total porosity was performed to characterize the brick masonry walls, which is needed for running HAM simulations. The results of the analysis showed that the threatening factors differ between climate zones, but the most influential factors across Iran are extreme temperature increase and erosion. In the north-western region of Iran, one of the key factors is wind erosion. In the north, rainfall erosion and mold growth risk are the key factors. In the north-eastern part, where the case study is located, the important parameter is wind erosion.Keywords: brick, climate change, degradation criteria, heritage, Timurid period
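A minimal sketch of one of the degradation risk assessment parameters mentioned above: counting freeze-thaw cycles from an hourly temperature series, with a small hysteresis band to avoid counting noise as cycles. The synthetic temperatures and the thresholds are assumptions for illustration, not HAM-simulation or climate-model output.

```python
# Count freeze-thaw cycles in an hourly surface-temperature series.
import numpy as np

hours = np.arange(24 * 120)                     # roughly four winter months of hourly values
temp = 2.0 + 6.0 * np.sin(2 * np.pi * hours / 24) + np.random.default_rng(1).normal(0, 1.5, hours.size)

def freeze_thaw_cycles(t, freeze=-1.0, thaw=1.0):
    """One cycle = temperature drops below `freeze` and later rises above `thaw`."""
    cycles, frozen = 0, False
    for value in t:
        if not frozen and value < freeze:
            frozen = True
        elif frozen and value > thaw:
            frozen = False
            cycles += 1
    return cycles

print("freeze-thaw cycles in the period:", freeze_thaw_cycles(temp))
```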
Procedia PDF Downloads 118734 Status Quo Bias: A Paradigm Shift in Policy Making
Authors: Divyansh Goel, Varun Jain
Abstract:
Classical economics works on the principle that people are rational and analytical in their decision making and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game theory model. This model has failed on many occasions to estimate the behavior and dealings of rational people, providing evidence of other underlying heuristics and cognitive biases at work. This paper probes the study of these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem which many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the actual number of people signing up for it. This paper is an attempt to shape public policy so as to increase the number of organ donations and close the gap between the number of people who believe in signing up for organ donation and the number who actually do. The key assumption here is that in cases of cognitive dissonance, where people have an inconsistency due to conflicting views, they tend to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research in this project involves an assay of mandated choice models of organ donation with two case studies: first, the opt-in system of Germany (where people have to explicitly sign up for organ donation), and second, the opt-out system of Austria (every citizen is an organ donor at birth and has to explicitly register a refusal). Additionally, a detailed analysis of the experiment performed by Eric J. Johnson and Daniel G. Goldstein is presented. Their research, as well as independent experiments such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results: the general population has by and large no rigid stand on organ donation and is susceptible to status quo bias, which in turn can determine whether a large majority of people consent to organ donation or not. Thus, in our paper, we throw light on how governments can use status quo bias to drive positive social change by making policies in which everyone is by default an organ donor, which will, in turn, save the lives of people who succumb on organ transplantation waitlists and save the economy countless hours of productivity.Keywords: behavioral economics, game theory, organ donation, status quo bias
Procedia PDF Downloads 299733 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.Keywords: high performance, big data, parallel computation, molecular data, computational biology
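A minimal sketch of the all-to-all comparison pattern mentioned above, run in parallel with a process pool over toy DNA strings. The sequences and the crude identity score are illustrative assumptions; real pipelines use optimized aligners distributed over HPC clusters.

```python
# Parallel all-to-all pairwise comparison: the number of pairs grows as O(n^2),
# which is what drives the HPC requirement for large sequence collections.
from itertools import combinations
from multiprocessing import Pool

SEQS = ["ACGTACGTGG", "ACGTTCGTGA", "TTGTACGAGG", "ACGAACGTGC"]   # toy sequences (assumed)

def identity(pair):
    i, j = pair
    a, b = SEQS[i], SEQS[j]
    matches = sum(x == y for x, y in zip(a, b))
    return (i, j, matches / min(len(a), len(b)))   # crude identity score for equal-length toys

if __name__ == "__main__":
    pairs = list(combinations(range(len(SEQS)), 2))
    with Pool(processes=4) as pool:
        for i, j, score in pool.map(identity, pairs):
            print(f"seq{i} vs seq{j}: {score:.2f}")
```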
Procedia PDF Downloads 362732 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared with the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution and emissions from factories. The observation data include wind speed, wind direction, relative humidity, temperature and other variables. The observations can be collected in real time from the open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS quantitative industrial factories. This study completed a causal inference engine and produces an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking and random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (total suspension particulates, TSP) sensors and the forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration. These different time-series qualitative and quantitative data sources were successfully combined into a causal inference engine in the cloud that is practicable for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operation and reduce emissions in advance.Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
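A minimal sketch of the particle tracking and random walk step used for advection and diffusion: each particle is advected by a constant wind and perturbed by a Gaussian step with standard deviation sqrt(2*K*dt), then binned onto a coarse grid. The wind, eddy diffusivity, release point and grid are assumed values, not the engine's calibrated inputs.

```python
# Random-walk particle tracking for a single emission source over 12 hourly steps.
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps, dt = 5000, 12, 3600.0       # 12 hourly steps of 3600 s
u, v = 2.0, 0.5                                   # m/s, constant wind toward +x/+y (assumed)
K = 50.0                                          # m^2/s eddy diffusivity (assumed)

pos = np.zeros((n_particles, 2))                  # all particles released at the stack (0, 0)
for _ in range(n_steps):
    drift = np.array([u, v]) * dt                                     # advection by the wind
    jitter = rng.normal(0.0, np.sqrt(2.0 * K * dt), size=pos.shape)   # diffusion as a random walk
    pos += drift + jitter

# bin particle counts onto a coarse grid to mimic the gridded hourly concentration map
grid, xedges, yedges = np.histogram2d(pos[:, 0], pos[:, 1], bins=20, range=[[0, 2e5], [-1e4, 5e4]])
print("plume centroid (km):", pos.mean(axis=0) / 1e3)
print("max particles in a single cell:", int(grid.max()))
```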
Procedia PDF Downloads 83731 Neuroblastoma in Children and the Potential Involvement of Viruses in Its Pathogenesis
Authors: Ugo Rovigatti
Abstract:
Neuroblastoma (NBL) has epitomized, for at least 40 years, our understanding of cancer cellular and molecular biology and its potential application to novel therapeutic strategies. This includes the discovery of the very first oncogene aberrations and of tumorigenesis suppression by differentiation in the 1980s; the potential role of suppressor genes in the 1990s; the relevance of immunotherapy in the first decade of the millennium; and the discovery of additional mutations by NGS technology in the second decade. Similar discoveries were achieved in the majority of human cancers, and similar therapeutic interventions followed from the NBL discoveries. Unfortunately, targeted therapies suggested by specific mutations (such as MYCN amplification, MNA, present in about one-quarter to one-fifth of cases) have not produced therapeutic successes in aggressive NBL, where the prognosis is still dismal. The reasons appear to be linked to tumor heterogeneity, which is particularly evident in NBL but is also a clear hallmark of aggressive human cancers in general. The new avenue of cancer immunotherapy (CIT) provided new hope for cancer patients, but we still do not know the cellular or molecular targets. CIT is emblematic of high-risk disease (HR-NBL), since passive immunotherapy against GD2 is still providing better survival. We recently critically reviewed and evaluated the literature depicting the genomic landscapes of HR-NBL, coming to the qualified conclusion that, among hundreds of affected genes, potential targets or chromosomal sites, none correlated with anti-GD2 sensitivity. A better explanation is provided by the Micro-Foci inducing Virus (MFV) model, which predicts that infection of neuroblasts with MFV, an RNA virus isolated from a cancer cluster (space-time association) of HR-NBL cases, elicits the appearance of MNA and additional genomic aberrations through mechanisms resembling chromothripsis. Neuroblasts infected with low titers of MFV amplified MYCN up to 100-fold and became highly transformed and malignant, thus causing neuroblastoma in young rat pups of the SD and Fisher-344 strains and larger tumor masses in nu/nu mice. An association with GD2 was discovered, since this glycosphingolipid is also the receptor for the MFV family of viruses (dsRNA viruses). It is concluded that a dsRNA virus, MFV, appears to provide better explanatory mechanisms for the genesis of i) specific genomic aberrations such as MNA; ii) extensive tumor heterogeneity and chromothripsis; and iii) the effects of passive immunotherapy with anti-GD2 monoclonals, and that this and similar models should be further investigated in both pediatric and adult cancers.Keywords: neuroblastoma, MYCN, amplification, viruses, GD2
Procedia PDF Downloads 99