Search results for: mode prediction
338 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids
Authors: Markus Rütten, Olaf Wünsch
Abstract:
Non-Newtonian fluid properties can change the flow behaviour significantly, and its prediction becomes more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. In the case of isothermal flows of Newtonian fluids, the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. In the case of thermo-viscous shear-thinning fluids, the flow regime can change significantly depending on the temperature of the viscous wall of the cylinder. The Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study, the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, thermal boundary conditions have been varied over a wide range in order to derive a relation between the dimensionless heat transfer, Reynolds and Strouhal numbers. Together with the shear thinning caused by the high shear rates close to the cylinder wall, this leads to a significant decrease of viscosity of three orders of magnitude in the nearfield of the cylinder and a reduction of two orders of magnitude in the wake field. The shear-thinning effect is able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating up the cylinder wall leads to a delayed flow separation and a narrower wake flow, giving less space for the sequence of counter-rotating vortices. This spatial limitation not only reduces the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher frequencies and damps higher harmonics. Eventually, the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to describe a relation for the main characteristic order parameters.
Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding
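As a concrete illustration of the Carreau-WLF coupling this abstract describes, the following minimal Python sketch evaluates a temperature-shifted Carreau viscosity and the resulting local Reynolds number. All parameter values (the "universal" WLF constants, the Carreau parameters, and the flow conditions) are illustrative assumptions, not the study's values, and the simple shift of both the zero-shear viscosity and the time constant is one common form of time-temperature superposition.

```python
import numpy as np

def wlf_shift(T, T_ref=293.15, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry shift factor a_T: log10(a_T) = -C1*dT/(C2+dT)."""
    dT = T - T_ref
    return 10.0 ** (-C1 * dT / (C2 + dT))

def carreau_viscosity(shear_rate, T, eta0=10.0, eta_inf=1e-3, lam=1.0, n=0.4):
    """Carreau shear-thinning viscosity with a WLF temperature shift (assumed form)."""
    aT = wlf_shift(T)
    return eta_inf + (aT * eta0 - eta_inf) * (
        1.0 + (aT * lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Local Reynolds number around a heated cylinder (illustrative values):
rho, U, D = 1000.0, 0.05, 0.02            # density, inflow speed, diameter
for T in (293.15, 313.15, 333.15):        # unheated wall -> heated wall
    eta = carreau_viscosity(shear_rate=100.0, T=T)
    print(f"T={T:.0f} K  eta={eta:.3e} Pa.s  Re={rho * U * D / eta:.1f}")
```

The shedding frequency f would then enter through the Strouhal number St = fD/U, which is how a local viscosity drop near the heated wall propagates into the global shedding behaviour.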
Procedia PDF Downloads 297
337 Iranian English as Foreign Language Teachers' Psychological Well-Being across Gender: During the Pandemic
Authors: Fatemeh Asadi Farsad, Sima Modirkhameneh
Abstract:
The purpose of this study was to explore the pattern of Psychological Well-Being (PWB) of Iranian male and female EFL teachers during the pandemic. It was intended to see whether such a drastic change in the context and mode of teaching affects teachers' PWB. Furthermore, the possible difference between the six elements of PWB of Iranian male vs. female EFL teachers during the pandemic was investigated. A further purpose was to find out the EFL teachers' perceptions of any modifications, and the factors leading to such modifications, in their PWB during the pandemic. For the purpose of this investigation, a total of 81 EFL teachers (59 female, 22 male) with an age range of 25 to 35 were conveniently sampled from different cities in Iran. Ryff's PWB questionnaire was sent to participant teachers through online platforms to elicit data on their PWB. As for their perceptions of the possible modifications and the factors involved in PWB during the pandemic, a set of semi-structured interviews was run among both sample groups. The findings revealed that male EFL teachers had the highest mean on personal growth, followed by purpose in life and self-acceptance, and the lowest mean on environmental mastery. With a slightly similar pattern, female EFL teachers had the highest mean on personal growth, followed by purpose in life and positive relationships with others, with the lowest mean on environmental mastery. However, no significant difference was observed between the male and female groups' overall means on the elements of PWB. Additionally, participants perceived that their anxiety level in online classes altered due to factors like (1) computer literacy skills, (2) lack of social communication and interaction with colleagues and students, (3) online class management, (4) overwhelming workloads, and (5) time management. The study ends with further suggestions regarding effective online teaching preparation that takes teachers' PWB into account, especially in severe situations such as the COVID-19 pandemic. The findings can inform reforms of educational policies concerned with enhancing EFL teachers' PWB through computer literacy courses and stress management courses. It is also suggested that, to proactively support teachers' mental health, it is necessary to provide them with advisors and psychologists, free of charge if possible. Limitations: One limitation is the small number of participants (81), suggesting that future replications should include more participants for reliable findings. Another limitation is the gender imbalance, which future studies should address to yield better outcomes. Furthermore, the limited set of data-gathering tools suggests using observations, diaries, and narratives for more insights in future studies. The study focused on one model of PWB, calling for further research on other models in the literature. Considering the wide effect of the COVID-19 pandemic, future studies should consider additional variables (e.g., teaching experience, age, income) to understand Iranian EFL teachers' vulnerabilities and strengths better.
Keywords: online teaching, psychological well-being, female and male EFL teachers, pandemic
Procedia PDF Downloads 47
336 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process to produce fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports the management of a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent's environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and the intelligent decision-making mechanism, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and proves its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
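To make the environment formulation concrete, here is a minimal sketch of a continuous-action trading environment of the kind described above. The state features, cost model, softmax re-allocation, and reward shaping are illustrative assumptions, not the authors' exact design; a TD3 agent from any standard RL library could be plugged into this step/reset interface.

```python
import numpy as np

class TradingEnv:
    """Minimal sketch of a multi-asset trading environment (assumed design)."""

    def __init__(self, prices, features, cost=0.001):
        self.prices = prices        # (T, n_assets) price matrix
        self.features = features    # (T, n_assets, k) indicators + sentiment
        self.cost = cost            # proportional transaction cost
        self.reset()

    def reset(self):
        self.t = 0
        self.weights = np.zeros(self.prices.shape[1])  # start in cash
        return self._state()

    def _state(self):
        # Partial observation: current indicators plus current allocation.
        return np.concatenate([self.features[self.t].ravel(), self.weights])

    def step(self, action):
        # Continuous action -> portfolio weights via a numerically safe softmax.
        e = np.exp(action - action.max())
        new_w = e / e.sum()
        turnover = np.abs(new_w - self.weights).sum()
        self.t += 1
        asset_ret = self.prices[self.t] / self.prices[self.t - 1] - 1.0
        # Reward: portfolio return net of transaction costs on the turnover.
        reward = float(new_w @ asset_ret) - self.cost * turnover
        self.weights = new_w
        done = self.t >= len(self.prices) - 1
        return self._state(), reward, done
```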
Procedia PDF Downloads 178
335 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques on this vast library, a variety of prospects for precision medicine, predictive analytics, and insight generation become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluations are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data items, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to use these approaches properly, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
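A minimal sketch of the risk-stratification workflow mentioned above follows, using synthetic stand-in data rather than real electronic health records; the model choice, feature construction, and 0.7 high-risk threshold are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))           # stand-ins for labs, vitals, lifestyle markers
risk = 0.8 * X[:, 0] + 0.5 * X[:, 3]      # hidden "true" risk signal (synthetic)
y = (risk + rng.normal(size=1000) > 1.0).astype(int)  # disease-onset label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Predicted probabilities stratify patients into risk tiers.
p = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, p), 3))
high_risk = p > 0.7                        # tier threshold is an assumption
print("flagged high-risk patients:", int(high_risk.sum()))
```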
Procedia PDF Downloads 62
334 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and its corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS which reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, which is described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculating different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, while the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period of the appearance of the secondary distant metastases; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of the secondary distant metastases. CoMPaS enables, for the first time, the prediction of the 'whole natural history' of the primary tumor and the secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of the IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in the lymph nodes (N0); b) it facilitates the understanding of the appearance period and inception of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
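The exponential-growth bookkeeping behind 'non-visible' periods and doubling counts can be illustrated with a short sketch; the single-cell size, detection threshold, and doubling time below are generic textbook-style assumptions, not CoMPaS parameters.

```python
import math

def volume(d_mm):
    """Volume of a spherical tumor of diameter d (mm), in mm^3."""
    return math.pi * d_mm ** 3 / 6.0

def doublings(d_start_mm, d_end_mm):
    """Number of volume doublings to grow from one diameter to another."""
    return math.log2(volume(d_end_mm) / volume(d_start_mm))

d_cell = 0.01       # ~10 micrometres, a single cell (assumption)
d_visible = 10.0    # ~1 cm, a typically detectable primary tumor (assumption)
n = doublings(d_cell, d_visible)            # ~30 doublings
dt_days = 100.0                             # assumed volume doubling time
print(f"'non-visible' period: {n:.0f} doublings ~ {n * dt_days / 365:.1f} years")
```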
Procedia PDF Downloads 341
333 Identification of Potent and Selective SIRT7 Anti-Cancer Inhibitor via Structure-Based Virtual Screening and Molecular Dynamics Simulation
Authors: Md. Fazlul Karim, Ashik Sharfaraz, Aysha Ferdoushi
Abstract:
Background: Computational medicinal chemistry approaches are used for designing and identifying new drug-like molecules, predicting properties and pharmacological activities, and optimizing lead compounds in drug development. SIRT7, a nicotinamide adenine dinucleotide (NAD+)-dependent deacylase which regulates aging, is an emerging target for cancer therapy, with mounting evidence that SIRT7 downregulation plays important roles in reversing cancer phenotypes and suppressing tumor growth. Activation or altered expression of SIRT7 is associated with the progression and invasion of various cancers, including liver, breast, gastric, prostate, and non-small cell lung cancer. Objectives: The goal of this work was to identify potent and selective bioactive candidate inhibitors of SIRT7 by in silico screening of small molecule compounds obtained from Nigella sativa (N. sativa). Methods: The SIRT7 structure was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), and its active site was identified using CASTp and metaPocket. Molecular docking simulation was performed with the PyRx 0.8 virtual screening software. Drug-likeness properties were tested using SwissADME and pkCSM. In silico toxicity was evaluated by Osiris Property Explorer. Bioactivity was predicted by Molinspiration software. Antitumor activity was screened by Prediction of Activity Spectra for Substances (PASS) using the Way2Drug web server. Molecular dynamics (MD) simulation was carried out with the Desmond v3.6 package. Results: A total of 159 bioactive compounds from N. sativa were screened against the SIRT7 enzyme. Five bioactive compounds: chrysin (CID:5281607), pinocembrin (CID:68071), nigellidine (CID:136828302), nigellicine (CID:11402337), and epicatechin (CID:72276) were identified as potent SIRT7 anti-cancer candidates after docking score evaluation and applying Lipinski's Rule of Five. Finally, MD simulation identified chrysin as the top SIRT7 anti-cancer candidate molecule. Conclusion: Chrysin, which shows a potential inhibitory effect against SIRT7, can act as a possible anti-cancer drug candidate. This inhibitor warrants further evaluation to check its pharmacokinetic and pharmacodynamic properties both in vitro and in vivo.
Keywords: SIRT7, antitumor, molecular docking, molecular dynamics simulation
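The Rule-of-Five filtering step applied to the docking hits can be sketched with RDKit (not one of the tools named above, so this is an assumed substitute for illustration); the SMILES string is believed to correspond to chrysin but should be verified against PubChem CID 5281607.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_lipinski(smiles):
    """Lipinski's Rule of Five: MW<=500, logP<=5, HBD<=5, HBA<=10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

# SMILES assumed for chrysin (5,7-dihydroxyflavone); verify before reuse.
chrysin = "C1=CC=C(C=C1)C2=CC(=O)C3=C(C=C(C=C3O2)O)O"
print("chrysin passes Rule of Five:", passes_lipinski(chrysin))
```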
Procedia PDF Downloads 78
332 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience
Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi
Abstract:
Understanding the complexity of the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company's reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid the proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil-in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave an undeveloped reserve of about 3.82 MMstb from two identified oil restoration activities. These activities include side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results are very confirmatory and satisfying.
Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit
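For readers unfamiliar with how an oil-in-place figure like the 157 MMstb above is derived, the standard volumetric formula is N = 7758 · A · h · φ · (1 − Sw) / Bo, where 7758 converts acre-feet to barrels. The sketch below uses invented inputs chosen only to land in the same order of magnitude as the field's reported total; they are not Ovhor field parameters.

```python
def stooip_stb(area_acres, thickness_ft, porosity, sw, bo):
    """Stock-tank original oil in place (stb), standard volumetric formula."""
    return 7758.0 * area_acres * thickness_ft * porosity * (1.0 - sw) / bo

# Illustrative inputs only (not the Ovhor field values):
n = stooip_stb(area_acres=2500, thickness_ft=60, porosity=0.25, sw=0.30, bo=1.30)
print(f"STOOIP ~ {n / 1e6:.0f} MMstb")
```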
Procedia PDF Downloads 129
331 Internationalization Process Model for Construction Firms: Stages and Strategies
Authors: S. Ping Ho, R. Dahal
Abstract:
The global economy has drastically changed how firms operate and compete. Although the construction industry is 'local' by nature, the internationalization of the construction industry has become an inevitable reality. As a result of global competition, staying domestic is no longer safe from competition; on the contrary, growing to become an MNE (multi-national enterprise) becomes one of the important strategies for a firm to survive in the global competition. For successful entrance into competing markets, firms need to re-define their competitive advantages and re-identify the sources of those competitive advantages. A firm's initiation of internationalization is not necessarily a result of strategic planning but may also involve certain idiosyncratic events that pave the path leading to its internationalization. For example, a local firm's incidental or unintentional collaboration with an MNE can become the initiating point of its internationalization process. However, because of the intensive competition in today's global movement, many firms are compelled to initiate their internationalization as a strategic response to the competition. Understanding the process of internationalization and appropriately implementing the strategies at its different stages leads construction firms to a successful internationalization journey. This study develops a model of the internationalization process from which appropriate strategies that construction firms can implement at each stage are derived. The proposed model integrates two major and complementary views of internationalization and expresses the dynamic process of internationalization in three stages: the pre-international (PRE) stage, the foreign direct investment (FDI) stage, and the multi-national enterprise (MNE) stage. The strategies implied in the proposed model are derived with a focus on capability building, market locations, and entry modes, based on the resource-based view: value, rareness, imitability, and substitutability (VRIN). Potential construction firms willing to expand their business market area can benefit from the proposed dynamic process model. Strategies for internationalization, such as core competence strategy, market selection, partner selection, and entry mode strategy, can be derived from the model. The internationalization process is expressed in two different forms. First, we discuss the construction internationalization process, identify the driving factor(s) of the process, and explain the strategy formation within it. Second, we define the stages of internationalization along the process and the corresponding strategies in each stage. The strategies may include how to exploit existing advantages for competition at the current stage and how to develop or explore additional advantages appropriate for the next stage. In particular, the additionally developed advantages will then be accumulated and drive forward the firm's stage of internationalization, which will further determine the subsequent strategies, and so on and so forth, spiraling up the stages to a higher degree of internationalization. However, the formation of additional strategies for the next stage does not happen automatically, and the strategy evolution is based on the firm's dynamic capabilities.
Keywords: construction industry, dynamic capabilities, internationalization process, internationalization strategies, strategic management
Procedia PDF Downloads 62
330 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpes viruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. The treatment of both viruses is more or less successful with antivirals from the nucleoside analogue group. Their wide application increasingly leads to the emergence of resistant mutants. In the past, medicinal plants have been used to treat a number of infectious and non-infectious diseases. Their diversity and ability to produce a vast variety of secondary metabolites according to the characteristics of the environment give them the potential to help us in our warfare with viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since the emergence of resistant mutants is significantly complicated. The screening process is difficult due to the lack of standardization. That is why it is especially important to follow the mechanism of the antiviral action of plants. On the one hand, interactions among a plant's compounds may result in enhanced antiviral effects; on the other, the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. During our study, we followed the activity of various plant extracts on the viral replication cycle as well as their effect on the extracellular virion. We obtained our results following the logical sequence of the experimental settings: determining the cytotoxicity of the extracts, evaluating the overall effect on viral replication, and evaluating the effect on the extracellular virion. Where positive results were observed, we studied the activity on the individual stages of the viral replication cycle: viral adsorption, penetration, and the effect on replication depending on the time of addition. Our results indicate that some of the extracts from Lamium album have several targets. The first stages of the viral life cycle are most affected. Several of our active antiviral agents have shown an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several curative antiviral plants, some of which are from the Lamiaceae family. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations.
Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
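The cytotoxicity-then-potency sequence described above is conventionally summarised by the selectivity index SI = CC50 / IC50, the ratio of the concentration that is toxic to half the cells to the concentration that inhibits half the viral replication. The sketch below shows that arithmetic with invented placeholder values, not measurements from this study; the SI > 10 cut-off is a commonly used rule of thumb, also an assumption here.

```python
# Invented placeholder values (ug/mL), not data from the study above.
extracts = {
    "Lamium album aqueous":  {"CC50": 1200.0, "IC50": 85.0},
    "Nepeta nuda ethanolic": {"CC50": 900.0,  "IC50": 150.0},
}

for name, d in extracts.items():
    si = d["CC50"] / d["IC50"]   # selectivity index
    flag = "  <- promising" if si > 10 else ""
    print(f"{name}: SI = {si:.1f}{flag}")
```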
Procedia PDF Downloads 154
329 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e. acceleration and displacement demands). Most of the current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations, in general, are functions of component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, the NSC damping effect, etc., which cause underestimation of the component seismic acceleration demand. This work aims to circumvent the aforementioned shortcomings of code provisions, as well as to improve them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e. design spectra for structural components). A database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) had been conducted was used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different damping ratios of NSCs (i.e. 2, 5, 10, and 20% viscous damping). Several parameters affecting the NSC response are evaluated statistically. These parameters comprise the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damping UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
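The core computation behind a floor response spectrum is a sweep of damped single-degree-of-freedom oscillators (surrogates for the NSCs) over a range of periods, each integrated under a floor acceleration record, keeping the peak pseudo-acceleration per period. A minimal sketch follows using the standard Newmark average-acceleration scheme (gamma = 1/2, beta = 1/4, unconditionally stable); the sinusoidal input record is a placeholder, not one of the 20 records used in the study.

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of record ag (m/s^2) at given periods."""
    g, b = 0.5, 0.25                     # Newmark average-acceleration constants
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
        khat = k + g / (b * dt) * c + m / (b * dt ** 2)
        A = m / (b * dt) + g / b * c
        B = m / (2 * b) + dt * (g / (2 * b) - 1.0) * c
        u = v = 0.0
        acc = -ag[0]                     # from m*acc + c*v + k*u = -m*ag
        umax = 0.0
        for i in range(len(ag) - 1):
            dp = -m * (ag[i + 1] - ag[i])          # incremental base excitation
            du = (dp + A * v + B * acc) / khat
            dv = g / (b * dt) * du - g / b * v + dt * (1 - g / (2 * b)) * acc
            da = du / (b * dt ** 2) - v / (b * dt) - acc / (2 * b)
            u, v, acc = u + du, v + dv, acc + da
            umax = max(umax, abs(u))
        Sa.append(wn ** 2 * umax)        # pseudo-acceleration = wn^2 * |u|max
    return np.array(Sa)

dt = 0.01
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0.0, 10.0, dt))  # placeholder record
Sa = pseudo_accel_spectrum(ag, dt, np.linspace(0.05, 3.0, 60), zeta=0.05)
print("peak Sa [m/s^2]:", round(float(Sa.max()), 2))
```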
Procedia PDF Downloads 236
328 Telemedicine Services in Ophthalmology: A Review of Studies
Authors: Nasim Hashemi, Abbas Sheikhtaheri
Abstract:
Telemedicine is the use of telecommunication and information technologies to provide health care services, often not consistently available in distant rural communities, to people in these remote areas. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Thus, teleophthalmology can overcome geographical barriers and improve the quality, access, and affordability of eye health care services. Since teleophthalmology has become widely applied in recent years, the aim of this study was to determine the different applications of teleophthalmology around the world. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included in the study with no time restriction; the search covered publications up to 2015. Finally, 70 articles were surveyed. We classified the results based on the 'type of eye problems covered' and 'the type of telemedicine services'. Based on the review, from the perspective of health care levels, there are three levels of eye health care: primary, secondary, and tertiary eye care. From the perspective of eye care services, the main application of teleophthalmology in primary eye care was related to the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus, and age-related macular degeneration. The main application of teleophthalmology in secondary and tertiary eye care was related to the screening of eye problems, i.e. diabetic retinopathy, astigmatism, and glaucoma screening. Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology around the world. Real-time, store-and-forward, and hybrid methods were the main forms of communication from the perspective of 'teleophthalmology mode', used according to the IT infrastructure between the sending and receiving centers. From the specialists' perspective, early detection of serious age-related ophthalmic disease in the population, screening of eye disease processes, consultation in emergency cases, and comprehensive eye examination were the most important benefits of teleophthalmology. The cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services, and receiving specialist opinions, was also among the main advantages of teleophthalmology for patients. Teleophthalmology brings valuable secondary and tertiary care to remote areas. So, applying teleophthalmology for detection, treatment, and screening purposes, and expanding its use in new applications such as eye surgery, will be a key tool to promote public health and integrate eye care into primary health care.
Keywords: applications, telehealth, telemedicine, teleophthalmology
Procedia PDF Downloads 374
327 Adverse Childhood Experience of Domestic Violence and Domestic Mental Health Leading to Youth Violence: An Analysis of Selected Boroughs in London
Authors: Sandra Smart-Akande, Chaminda Hewage, Imtiaz Khan, Thanuja Mallikarachchi
Abstract:
According to UK police-recorded data, there has been a substantial increase in knife-related crime and youth violence in the UK since 2014, particularly in the London boroughs. These crime rates are disproportionately distributed across London, with the majority of these crimes occurring in the highly deprived areas of London and among young people aged 11 to 24, with large discrepancies across ethnicity, age, gender, and borough of residence. Comprehensive studies and literature have identified risk factors associated with knife carrying among youth: Adverse Childhood Experiences (ACEs), poor mental health, school or social exclusion, drug dealing, drug use, being a victim of violent crime, bullying, peer pressure, and gang involvement, to mention a few. ACEs are potentially traumatic events that occur in childhood; these can be experiences or stressful events in the early life of a child and can lead to an increased risk of damaging health or social outcomes in the latter life of the individual. Research has shown that young people involved in youth violence have had childhoods characterised by disproportionate adverse experiences, and substantial literature links ACEs to criminal or delinquent behavior. ACEs are commonly grouped by researchers into: Abuse (physical, verbal, sexual), Neglect (physical, emotional), and Household adversities (mental illness, incarcerated relative, domestic violence, parental separation, or bereavement). To the authors' best knowledge, no study to date has investigated how household mental health (the mental health of a parent or of a child) and domestic violence (against a parent or against a child) are related to knife homicides across the local authority areas of London. This study seeks to address the gap by examining a large sample of data from the London Metropolitan Police Force and the Characteristics of Children in Need data from the UK Department for Education. The aim of this review is to identify and synthesise evidence from data and a range of literature to identify the relationship between adverse childhood experiences and youth violence in the UK. Understanding the link between ACEs and future outcomes can support preventative action.
Keywords: adverse childhood experiences, domestic violence, mental health, youth violence, prediction analysis, London knife crime
Procedia PDF Downloads 119
326 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels
Authors: Lorenzo Petrucci
Abstract:
This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated in a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted power supply over time. As a consequence, a photovoltaic-thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable fuel supplies. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storages. In this article, we elaborate on models of the equipment, and from these models we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on the two days of the year having the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only during a day with very low irradiation levels does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice-builder and air-conditioning energy consumption by 40%.
Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration
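A priority-based dispatch rule of the kind described above can be sketched in a few lines; the source ordering (PVT first, then battery, then genset), the 40% minimum-load fraction, and the sizes below are assumptions for illustration, not the paper's exact control strategy.

```python
def dispatch(load_kw, pv_kw, soc_kwh, gen_rated_kw=50.0, min_load_frac=0.4):
    """One time step: return (genset_kw, batt_discharge_kw, dump_kw).

    PVT serves the (prioritized) load first, then the battery, and the
    genset runs last; a negative battery value means charging a surplus.
    """
    residual = load_kw - pv_kw
    if residual <= 0:
        return 0.0, residual, 0.0          # PV surplus charges the battery
    batt = min(residual, soc_kwh)          # battery covers what it can
    residual -= batt
    if residual <= 0:
        return 0.0, batt, 0.0
    # The genset must not run below its minimum-load threshold; any excess
    # is routed to dump resistors that convert it into useful heat.
    gen = max(residual, min_load_frac * gen_rated_kw)
    return gen, batt, gen - residual

print(dispatch(load_kw=12.0, pv_kw=3.0, soc_kwh=4.0))  # forces the genset on
```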
Procedia PDF Downloads 174
325 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Due to its continuous accumulation throughout the year in large quantities, it creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% of the composition is contributed by polysaccharides) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, and acetyl groups and the crystallinity of cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is generally applied before the enzymatic treatment of lignocellulose; it essentially removes the recalcitrant components in biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SCW). SCW was subjected to a range of physical, chemical, and physico-chemical pre-treatments, and a sequential, combinatorial pre-treatment strategy was also applied to attain the maximum sugar yield by combining two or more pre-treatments. All the pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. Besides, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored. Results showed that ultrasound treatment (31.06 mg/L) proved to be the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SCW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis employing a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found better in terms of total reducing sugar yield and low content of inhibitory compounds, which could be due to the fact that this mode of pre-treatment combines several mild treatment methods rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
324 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation
Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans
Abstract:
The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs, and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalization has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the course of the first 24 hours of an average individual patient's stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from the country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has only been counted once to account for multiple items being shipped as one package. Travel distances from origin were summed to give the total combined distance for all 52 products. The minimum possible total distance travelled from the country of origin to the UK for all types of product was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg. The CO2 produced shipping these items by air freight would equate to 30.1 kg; however, doing the same by sea would produce 0.2 kg CO2. Extrapolating these results to the 211,932 annual UK ICU admissions (2018-2019), even with the underestimates of distance and weight in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as being less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This must be made available to purchasers in order to give a fuller picture of the life cycle impact and allow for informed decision making in this regard.
Keywords: CO2, intensive care, plastic, transport
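The per-item freight-emissions arithmetic underlying figures like those above is a sum of weight × distance × emission factor over the items. The sketch below illustrates the pattern; the emission factors (kg CO2 per tonne-km) are assumed representative values, not the authors' exact figures, and the two-item list is an invented placeholder rather than the study's 52-product inventory.

```python
# Assumed representative emission factors, kg CO2 per tonne-km:
EF = {"air": 0.60, "sea": 0.004}

items = [                                  # (name, weight_kg, distance_km) - invented
    ("IV giving set", 0.090, 9_200),
    ("central line kit", 0.350, 5_600),
]

def total_co2(items, mode):
    """Sum per-item emissions: (tonnes) x (km) x (kg CO2 / tonne-km)."""
    return sum(w / 1000.0 * d * EF[mode] for _, w, d in items)

for mode in ("air", "sea"):
    print(f"{mode}: {total_co2(items, mode):.2f} kg CO2")
```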
Procedia PDF Downloads 178
323 An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University
Authors: Jasmine Renner, Alice Njuguna
Abstract:
The booming technological revolution is driving a change in delivery systems, especially for e-learning and distance learning in higher education. This report and its findings, an assessment of digital platforms, student online learning, teaching pedagogies, research, and training at Kenya College of Accounting University (hereinafter 'KCA'), were produced as a joint collaboration project between the Carnegie African Diaspora Fellowship and the staff, students, and faculty at KCA University. The participants in this assessment/research met on selected days during a six-week period, during which one-on-one consultations, surveys, questionnaires, focus groups, training, and seminars were conducted to ascertain online learning and teaching, curriculum development, research, and training at KCA. The project was organized into an eight-week workflow, with each week culminating in project activities designed to assess digital online teaching and learning at KCA. The project also included the training of distance learning instructors at KCA and the evaluation of KCA's distance learning platforms and programs. Additionally, through a curriculum audit and redesign, the project sought to enhance the curriculum development activities related to distance learning at KCA. The findings of this assessment/research represent a systematic, deliberate process of gathering, analyzing, and using data collected from DL students, DL staff and lecturers, and a librarian in charge of online learning resources and access at KCA. We engaged in one-on-one interviews and discussions with staff, students, and faculty and collated the findings to inform practices that are effective in the ongoing design and development of eLearning at KCA University. The overall findings of the project led to the following recommendations. First, there is a need to address the infrastructural challenges that led to poor internet connectivity for online learning, as well as training needs and content development for faculty and staff. Second, there is a need to manage cultural impediments within KCA, for example, fears of major change from one platform to another, and to secure institutional goodwill as a vital foundation for effective online learning. Third, at a practical and short-term level, based on the systematic findings of the research conducted, the following should be adopted at KCA University to promote the effective adoption of online learning: a) an eLearning-compatible faculty lab, b) revision of policy to include an eLearning strategy or strategic management, c) recognition of faculty and staff engaged in training for the adoption and implementation of eLearning, and d) adequate website resources on eLearning. The report and findings represent a comprehensive approach to a systematic assessment of online teaching and learning, research, and training at KCA.
Keywords: e-learning, digital platforms, student online learning, online teaching pedagogies
Procedia PDF Downloads 191
322 Risk Based Inspection and Proactive Maintenance for Civil and Structural Assets in Oil and Gas Plants
Authors: Mohammad Nazri Mustafa, Sh Norliza Sy Salim, Pedram Hatami Abdullah
Abstract:
Civil and structural assets normally have an average design life of more than 30 years. Adding to this advantage, these assets are normally subjected to a slow degradation process. Because repair and strengthening work for these assets is normally not dependent on plant shutdowns, the maintenance and integrity restoration of these assets is mostly done on an 'as required' and 'run to failure' basis. However, unlike in other industries, the exposure in the oil and gas environment is harsher as a result of corrosive soil and groundwater, chemical spills, frequent wetting and drying, icing and de-icing, steam and heat, etc. Due to this type of exposure, and the increasing level of structural defects and rectification in line with the increasing age of plants, asset integrity assessment requires a more defined scope and procedures based on risk and asset criticality. This leads to the establishment of a risk-based inspection and proactive maintenance procedure for civil and structural assets. To date, there is hardly any procedure or guideline as far as integrity assessment and the systematic inspection and maintenance of civil and structural assets (onshore) are concerned. Group Technical Solutions has developed a procedure and guideline that takes into consideration credible failure scenarios, asset risk and criticality from the process safety and structural engineering perspectives, structural importance, and modeling and analysis, among others. Detailed inspection that includes destructive and non-destructive tests (DT & NDT) and structural monitoring is also performed to quantify defects, assess their severity and impact on integrity, and identify the timeline for integrity restoration. Each defect and its credible failure scenario is assessed against the risk to people, the environment, reputation, and production loss. This technical paper is intended to share the established procedure and guideline and their execution in oil and gas plants. In line with the overall roadmap, the procedure and guideline will form part of specialized solutions to increase production and meet the 'Operational Excellence' target while extending the service life of civil and structural assets. As a result of the implementation, the management of civil and structural assets is now done more systematically, and the 'fire-fighting' mode of maintenance is being gradually phased out and replaced by a proactive and preventive approach. This technical paper will also set the criteria and pose the challenge to the industry for innovative repair and strengthening methods for civil and structural assets in the oil and gas environment, in line with safety, constructability, and the continuous modification and revamping of plant facilities to meet production demand.
Keywords: assets criticality, credible failure scenario, proactive and preventive maintenance, risk based inspection
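A generic sketch of the risk ranking that typically drives inspection priority in an RBI scheme follows: risk = likelihood × consequence, scored against the four receptors named above (people, environment, reputation, production loss). The 1-5 scales, the max-of-receptors rule, and the action threshold are assumptions for illustration, not the procedure developed by Group Technical Solutions.

```python
CONSEQUENCE_AREAS = ("people", "environment", "reputation", "production")

def risk_score(likelihood, consequences):
    """likelihood: 1-5; consequences: dict of area -> 1-5 severity (assumed scales)."""
    worst = max(consequences[a] for a in CONSEQUENCE_AREAS)
    return likelihood * worst        # 1 (negligible) .. 25 (intolerable)

# Example: a corroded pipe-rack foundation (invented scores).
pipe_rack = {"people": 4, "environment": 2, "reputation": 3, "production": 4}
score = risk_score(likelihood=3, consequences=pipe_rack)
print("risk score:", score, "-> prioritise inspection" if score >= 12 else "")
```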
Procedia PDF Downloads 404
321 Multicollinearity and MRA in Sustainability: Application of the Raise Regression
Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez
Abstract:
Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), which is a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one of the independent variables is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating factor is often highly correlated with the constitutive terms. Thus, great multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: inflated variances of the estimators may appear; there is a tendency to consider regressors non-significant when they probably are significant, together with a very high coefficient of determination; incorrect signs of the coefficients may appear; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables implies difficulties in fixing the individual effects of each one on the model under study. Carried over to moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that allows obtaining reliable results. A review of the works that applied MRA among the ten top journals of the field makes it clear that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables, the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test, and prediction). The proposal is applied to data from countries of the European Union for the last available year, regarding greenhouse gas emissions, per capita GDP, and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called 'subgroup regression analysis'. The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. Particularly, the use of raise regression mitigates great multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
Keywords: multicollinearity, MRA, interaction, raise
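The geometric idea behind raising can be sketched numerically: add to the problematic variable a multiple of its own residual (the part orthogonal to the other regressors), which preserves its information while moving it away from them. The data, the raising factor lambda, and the choice to raise the cross-product term are illustrative assumptions, not the paper's dataset or tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(5.0, 1.0, n)
x2 = 0.8 * x1 + rng.normal(0.0, 0.5, n)   # correlated constitutive terms
inter = x1 * x2                            # MRA cross-product term

def vif(target, others):
    """Variance inflation factor of `target` against the other regressors."""
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    r2 = 1.0 - (target - X @ beta).var() / target.var()
    return 1.0 / (1.0 - r2)

print("VIF of x1*x2 before raising:", round(vif(inter, [x1, x2]), 1))

# Raise the interaction: inter_raised = inter + lambda * e, where e is the
# residual of regressing inter on the constitutive terms.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, inter, rcond=None)
e = inter - X @ beta
lam = 2.0                                  # raising factor (assumed)
raised = inter + lam * e
print("VIF after raising:", round(vif(raised, [x1, x2]), 1))
```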
Procedia PDF Downloads 104
320 Ultrasound Assisted Alkaline Potassium Permanganate Pre-Treatment of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Lignocellulose is the largest reservoir of inexpensive, renewable carbon. It is composed of lignin, cellulose, and hemicellulose. Cellulose and hemicellulose are composed of the reducing sugars glucose and xylose and several other monosaccharides, which can be metabolised by microorganisms to produce several value-added products such as biofuels, enzymes, amino acids, etc. Enzymatic treatment of lignocellulose leads to the release of monosaccharides such as glucose and xylose. However, factors such as the presence of lignin, crystalline cellulose, acetyl groups, pectin, etc. contribute to recalcitrance, restricting the effective enzymatic hydrolysis of cellulose and hemicellulose. In order to overcome these problems, pre-treatment of lignocellulose is generally carried out, which essentially facilitates better degradation of lignocellulose. A range of pre-treatment strategies is commonly employed based on the mode of action, viz. physical, chemical, biological, and physico-chemical. However, existing pre-treatment strategies result in lower sugar yields and the formation of inhibitory compounds. In order to overcome these problems, we propose a novel pre-treatment which utilises the superior oxidising capacity of alkaline potassium permanganate, assisted by ultra-sonication, to break the covalent bonds in spent coffee waste and remove recalcitrant compounds such as lignin. The pre-treatment was conducted for 30 minutes using 2% (w/v) potassium permanganate at room temperature with a solid-to-liquid ratio of 1:10. The pre-treated spent coffee waste (SCW) was subjected to enzymatic hydrolysis using the enzymes cellulase and hemicellulase. Shake-flask experiments were conducted with a working volume of 50 mL of buffer containing 1% substrate. The results showed that the novel pre-treatment strategy yielded 7 g/L of reducing sugars after 24 hours, compared to 3.71 g/L obtained from biomass that had undergone dilute acid hydrolysis. From the results obtained, it is fairly certain that ultrasonication assists the oxidation of recalcitrant components in lignocellulose by potassium permanganate. Enzyme hydrolysis studies suggest that ultrasound-assisted alkaline potassium permanganate pre-treatment is far superior to treatment with dilute acid. Furthermore, SEM, XRD, and FTIR analyses were carried out to examine the effect of the new pre-treatment strategy on the structure and crystallinity of the pre-treated spent coffee waste. This novel one-step pre-treatment strategy was implemented under mild conditions and exhibited high efficiency in the enzymatic hydrolysis of spent coffee waste. Further study and scale-up are in progress in order to realise future industrial applications.
Keywords: spent coffee waste, alkaline potassium permanganate, ultra-sonication, physical characterisation
Procedia PDF Downloads 357
319 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model
Authors: Kang Lin Peng
Abstract:
Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury-product pricing: it is a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million, depending on the height reached in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model on the earth's surface. We apply general relativity theory to derive the mathematical formula for the space tourism pricing model, which subsumes the traditional time-independent model. In the future, the price of space travel will differ from that of current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with time as the only dependent variable is acceptable when commodities are on the earth's surface, called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services, flying at heights of 6, 12, and 19 kilometers, charge with a pricing model that measures the time coordinate independently. However, space tourism flies at heights of 100 to 550 kilometers, which enlarges the spacetime curvature: tourists escape from the near-zero curvature on the earth's surface to the larger curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel, to echo general relativity theory. Intuitively, this spacetime commodity needs to account for the change of spacetime curvature from the earth to space. We can assume that the value of each spacetime curvature unit corresponds to the gradient change of the Ricci or energy-momentum tensor. Then we know how much to spend by integrating the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. When the curvature contribution vanishes, the space travel pricing model degenerates into a time-independent model, the model of traditional commodity pricing. The contribution is that the derivation of the space tourism pricing model will be a breakthrough in philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent flat-spacetime model. A pricing model that embeds spacetime, as in general relativity theory, can better reflect the rationality and accuracy of space travel on the universal scale. Moving from an independent-time scale to a spacetime scale will bring a brand-new pricing concept for space-travel commodities. Fair and efficient spacetime economics will also come to human travel when we can travel in light-year units in the future.
Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature
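A minimal worked-equation sketch of the idea described above; the base price P_flat(t), the surcharge coefficient α, and the curvature measure κ(r) are illustrative symbols introduced here, not notation from the paper:

```latex
% Price of a trip to altitude h: a flat-spacetime base price plus a
% term integrating a curvature measure kappa(r) along the trajectory
P(h) \;=\; P_{\mathrm{flat}}(t) \;+\; \alpha \int_{r_E}^{\,r_E + h} \kappa(r)\,\mathrm{d}r
```

When κ ≡ 0 the integral vanishes and P(h) reduces to P_flat(t), i.e., the model degenerates to the traditional time-independent pricing described in the abstract.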
Procedia PDF Downloads 128
318 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, In Silico Screening, and Molecular Dynamics Simulation
Authors: Virendra Nath, Vipin Kumar
Abstract:
Background: Type II diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. Undesirable effects of the current medications have prompted researchers to develop more potent drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor family and play a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis and with lipid and glucose metabolism. Aims: PPARα/γ agonistic hits were identified by hierarchical virtual screening, followed by molecular dynamics simulation and knowledge-based structure-activity relationship (SAR) analysis using approved PPARα/γ dual agonists. Methods: The PPARα/γ agonistic activity of compounds was screened using Maestro through structure-based virtual screening and molecular dynamics (MD) simulation. Virtual screening of nuclear-receptor ligands was done, and the binding modes and protein-ligand interactions of newer entity(s) were investigated. Further, binding energy prediction and stability studies using MD simulation of the PPARα and PPARγ complexes were performed with the most promising hit, along with a structural comparative analysis of approved PPARα/γ agonists against the screened hit for knowledge-based SAR. Results and Discussion: The in silico approach recognized the nine most promising hits, which had better predicted binding energies than the reference drug compound (tesaglitazar). In this study, the key amino acid residues of the binding pockets of both targets PPARα/γ were identified as essential and were found to participate in the key interactions with the most promising dual hit (ChemDiv-3269-0443). Stability studies using MD simulation of the PPARα and PPARγ complexes with the most promising hit found the root mean square deviation (RMSD) stable around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the MD simulation of 20 nanoseconds (ns). The knowledge-based SAR studies of PPARα/γ agonists were conducted using 2D structures of approved drugs like aleglitazar, tesaglitazar, etc., towards the successful design and synthesis of PPAR agonistic candidates with anti-hyperlipidemic potential.
Keywords: computational, diabetes, PPAR, simulation
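The RMSD-stability check reported above can be illustrated with a short numpy sketch; the trajectory here is random placeholder data (real MD coordinates would come from the simulation package), and frames are assumed to be pre-aligned to the reference structure:

```python
import numpy as np

def rmsd_per_frame(traj, ref):
    """RMSD of each frame against a reference structure.

    traj: (n_frames, n_atoms, 3) coordinates, assumed pre-aligned.
    ref:  (n_atoms, 3) reference coordinates.
    """
    diff = traj - ref                                  # broadcasts over frames
    return np.sqrt((diff ** 2).sum(axis=(1, 2)) / traj.shape[1])

# Placeholder stand-in for a 20 ns run sampled into 2000 frames of 350 atoms
traj = np.random.default_rng(0).normal(size=(2000, 350, 3))
ref = traj[0]
rmsd = rmsd_per_frame(traj, ref)
print("mean RMSD over run: %.2f (arbitrary units)" % rmsd.mean())
```

A stable complex corresponds to this per-frame series plateauing, e.g., around the 2 Å level the abstract reports for the PPARα complex.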
Procedia PDF Downloads 103
317 Towards the Development of Uncertainties Resilient Business Model for Driving the Solar Panel Industry in Nigeria Power Sector
Authors: Balarabe Z. Ahmad, Anne-Lorène Vernay
Abstract:
The emergence of electricity in Nigeria dates back to 1896. The power plants have the potential to generate 12,522 MW of electric power, whereas current dispatch is about 4,000 MW; access to electrification is about 60%, with consumption at 0.14 MWh/capita. The government embarked on energy reforms to mitigate energy poverty. The reform targeted the provision of electricity access to 75% of the population by 2020 and 90% by 2030. Total electricity demand has been projected to grow by a factor of 5 by 2035. This means that Nigeria will require almost 530 TWh of electricity, which can be delivered through generators with a capacity of 65 GW. Moreover, the geographical location of Nigeria places it in an advantageous position as a source of solar energy; the availability of a high-sunshine belt is obvious in the country. The implication is that the far North, where energy poverty is high, has about twice the solar radiation of southern Nigeria. Hence, the chance of generating solar electricity is 66%, at 11,850 × 10³ GWh per year, which is one hundred times the current electricity consumption rate in the country. Harvesting this huge potential may be a mirage if entrepreneurs in the solar panel business are left with conventional business models that are not uncertainty resilient. Currently, business entities in renewable energy (RE) in Nigeria are uncertain about access to the national grid, the purchasing potential of cooperating organizations, currency fluctuations, and interest rate increases. Uncertainties such as the security of projects and government policy are issues entrepreneurs must navigate to remain sustainable in the solar panel industry in Nigeria. The aim of this paper is to identify how entrepreneurial firms consider uncertainties in developing workable business models for commercializing solar energy projects in Nigeria. In an attempt to develop a novel business model, the paper investigated how entrepreneurial firms assess and navigate uncertainties. The roles of key stakeholders in helping entrepreneurs to manage uncertainties in the Nigerian RE sector were probed in the ongoing study. The study explored empirical uncertainties that are peculiar to RE entrepreneurs in Nigeria. A mixed-mode research design was adopted, using qualitative data from face-to-face interviews conducted with solar energy entrepreneurs and experts drawn from key stakeholders. Content analysis of the interviews was done using Atlas.ti 9, a qualitative analysis tool. The results suggested that all stakeholders are required to synergize in developing an uncertainty-resilient business model. It was opined that RE entrepreneurs need modifications to the business recommendations encapsulated in Nigeria's energy policy to strengthen their capability to deliver solar energy solutions to Nigeria's yawning demand.
Keywords: uncertainties, entrepreneurial, business model, solar-panel
Procedia PDF Downloads 148
316 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. One study found that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were also compared in this study. The cross-validation technique was applied to compute the errors, and the result showed that the Exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may consist of redundant stations that make little or no contribution to the network's performance in providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations, which were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations managed to reduce the estimated variance, proving that location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
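The combination of a semivariogram-based objective with simulated annealing can be sketched briefly. In the sketch below, the station coordinates, estimation grid, variogram parameters, and the nearest-station variance proxy are all illustrative stand-ins for the study's actual kriging-variance computation:

```python
import numpy as np

rng = np.random.default_rng(42)

def exponential_variogram(h, nugget=0.0, sill=1.0, rng_a=30.0):
    """Exponential semivariogram gamma(h); parameters are illustrative."""
    return nugget + (sill - nugget) * (1 - np.exp(-3 * h / rng_a))

def network_variance(stations, grid):
    """Proxy for estimation variance: mean variogram value from each grid
    point to its nearest retained station (smaller is better)."""
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    return exponential_variogram(d.min(axis=1)).mean()

# Synthetic stand-ins for the 84 Johor gauges and an estimation grid
candidates = rng.uniform(0, 100, size=(84, 2))
grid = rng.uniform(0, 100, size=(400, 2))

k = 64                            # target network size found in the study
keep = rng.choice(84, size=k, replace=False)
best = network_variance(candidates[keep], grid)

T = 1.0
for step in range(5000):          # simulated annealing over station subsets
    new_station = rng.integers(84)
    if new_station in keep:
        continue
    trial = keep.copy()
    trial[rng.integers(k)] = new_station
    v = network_variance(candidates[trial], grid)
    # Accept improvements always, worse moves with Boltzmann probability
    if v < best or rng.random() < np.exp((best - v) / T):
        keep, best = trial, v
    T *= 0.999                    # geometric cooling schedule

print("estimated variance of optimized 64-gauge network:", round(best, 4))
```

The annealing step that occasionally accepts a worse configuration is what lets the search escape local minima that a greedy swap procedure would get stuck in.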
Procedia PDF Downloads 301
315 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa
Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions, and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low-energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits in shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean value shows the dominance of fine sand-size particles, which points to relatively low-energy conditions of deposition. In addition, the LDF results point to low-energy conditions during the deposition of the Prince Albert, Collingham and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high-energy conditions. The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension and rolling/saltation, and by graded suspension. Furthermore, the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation was the major process of transportation, although suspension and traction also played some role during deposition of the sediments. The sediments were mainly in saltation and suspension before being deposited.
Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa
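The graphic statistical parameters referred to above (graphic mean, sorting, skewness, kurtosis) are conventionally the Folk and Ward measures computed from percentiles of the grain-size distribution in phi units. The sketch below shows those standard formulas on a synthetic sample; the data are placeholders, not the study's measurements:

```python
import numpy as np

def folk_ward_stats(phi):
    """Folk and Ward (1957) graphic measures from grain sizes in phi units."""
    p = {q: np.percentile(phi, q) for q in (5, 16, 25, 50, 75, 84, 95)}
    mean = (p[16] + p[50] + p[84]) / 3
    sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, sorting, skewness, kurtosis

# Placeholder sample: very fine to fine sand falls roughly in the 2-4 phi range
phi = np.random.default_rng(1).normal(loc=3.0, scale=0.4, size=500)
mz, sigma, sk, kg = folk_ward_stats(phi)
print(f"mean={mz:.2f} phi, sorting={sigma:.2f}, skewness={sk:.2f}, kurtosis={kg:.2f}")
```

On the Folk and Ward scales, a sorting value between 0.5 and 0.71 phi corresponds to "moderately well sorted", skewness near zero to "near-symmetrical", and kurtosis near 1 to "mesokurtic" — the classes reported for these sandstones.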
Procedia PDF Downloads 480
314 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity by electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study was to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptions including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for recognition of the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
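A few of the listed features can be computed with standard scientific-Python tools. The sketch below filters a placeholder signal with the 0.08–4 Hz band-pass described above and extracts the root mean square, zero-crossing count, and mean/median frequencies; the sampling rate and the signal itself are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 20.0  # Hz; sampling rate is an assumption for illustration

def ehg_features(x):
    """A few of the EHG features listed above, on a 1-D signal."""
    # 0.08-4 Hz band-pass Butterworth filter, as in the study
    b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, x)

    rms = np.sqrt(np.mean(x ** 2))
    zero_crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))

    f, pxx = welch(x, fs=FS, nperseg=1024)         # power spectral density
    mean_freq = np.sum(f * pxx) / np.sum(pxx)
    cum = np.cumsum(pxx)                            # median splits total power
    median_freq = f[np.searchsorted(cum, cum[-1] / 2)]
    return rms, zero_crossings, mean_freq, median_freq

signal = np.random.default_rng(2).normal(size=30 * 60 * int(FS))  # placeholder
print(ehg_features(signal))
```

The mean frequency computed this way is the feature the study found significantly lower in preterm than in term recordings.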
Procedia PDF Downloads 571
313 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty
Abstract:
Internal waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as continental shelf slopes, subsurface ridges, seamounts, etc. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides associated with the semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. However, in August, the spectral estimate is found to be maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are well captured by the model. EOF analysis was performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both observations and model simulations. The first three modes are sufficient to describe most of the variability for semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulation suggests that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal
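The EOF decomposition into vertical modes described above is, in practice, a singular value decomposition of the demeaned velocity record. The sketch below applies it to a synthetic, mode-1-like current record; the record length, depth bins, and noise level are illustrative, not the study's data:

```python
import numpy as np

def eof_modes(u, n_modes=3):
    """EOF decomposition of a baroclinic velocity record u(time, depth).

    Returns the first n_modes vertical structures and the fraction of
    total variance each mode explains.
    """
    anomalies = u - u.mean(axis=0)                  # remove the time mean
    _, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    return vt[:n_modes], variance_fraction[:n_modes]

# Placeholder record: 30 days of hourly currents over 40 depth bins
rng = np.random.default_rng(3)
depth_structure = np.cos(np.pi * np.linspace(0, 1, 40))   # mode-1-like shape
t = np.arange(720)
u = np.outer(np.sin(2 * np.pi * t / 12.42), depth_structure)  # M2 period ~12.42 h
u += 0.2 * rng.normal(size=u.shape)

modes, var = eof_modes(u)
print("variance explained by first three modes:", np.round(var, 3))
```

With a single dominant vertical structure, the first mode captures most of the variance, mirroring the 70-80% attributed to Mode-1 in the study.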
Procedia PDF Downloads 170
312 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)
Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri
Abstract:
Japanese encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, treatment involves only supportive care and symptom management, with prevention through vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were done against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To find leads more specifically and accurately, in this study the proprietary software Schrödinger GLIDE has been used. The druggability of the pockets in the NS3 helicase crystal structure was first calculated by SITEMAP. The sites were then screened according to compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed using GLIDE on ligands acquired from the databases KinaseSARfari, KinaseKnowledgebase and the Published Inhibitor Set. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked to all the best-scoring ligands. Ligands that scored low against the human proteins were chosen for further study, and the high-scoring ligands were screened out. Seventy-three ligands were listed as the best-scoring ones after performing HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values lower than 2 Å. Docking results with these three proteins revealed the inhibitors that could interfere with and inhibit human proteins; those inhibitors were screened out. Among the ones left, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from producing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of target modification strategies complementing the docking methodology, which can result in better lead compounds, is in progress. Those enhanced leads can lead to better in vitro testing.
Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase
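The two-sided filter in this workflow — keep ligands that dock well to NS3 but poorly to the aligned human proteins — amounts to a simple selectivity cut over a score table. The sketch below shows the logic with made-up scores; the column names, values, and cutoffs are assumptions for illustration, not GLIDE outputs:

```python
import pandas as pd

# Illustrative docking-score table (more negative = stronger predicted binding)
scores = pd.DataFrame({
    "ligand": ["L1", "L2", "L3", "L4"],
    "ns3_score": [-9.2, -8.7, -8.5, -7.9],          # vs JEV NS3 helicase
    "best_human_score": [-5.1, -8.9, -4.8, -5.0],   # best score vs the 3 human homologs
})

NS3_CUTOFF = -8.0     # keep ligands binding NS3 at least this strongly
HUMAN_CUTOFF = -7.0   # drop ligands that also bind human proteins strongly

hits = scores[(scores["ns3_score"] <= NS3_CUTOFF)
              & (scores["best_human_score"] > HUMAN_CUTOFF)]
print(hits["ligand"].tolist())   # ['L1', 'L3']
```

L2 is discarded for likely off-target binding and L4 for weak binding to NS3, which mirrors the abstract's two screening-out steps.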
Procedia PDF Downloads 230
311 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses across industries is the significant cost associated with delays in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions, and this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the datasets, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with differing degrees of complexity, which pose a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model is then developed to predict aircraft component replacement in advance by exploring historical aircraft data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of its performance, using real-world aircraft operation and maintenance datasets that span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach in handling multiclass imbalanced datasets; the results again show good performance compared to other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
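One common instance of the hybrid idea — resample the minority class, then train an ensemble — can be sketched with scikit-learn. The synthetic 2% failure rate, oversampling ratio, and model settings below are illustrative assumptions, not the paper's actual method or data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced maintenance log: ~2% replacement events
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.98],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Hybrid step 1: oversample the minority (replacement) class with replacement
minority = np.where(y_tr == 1)[0]
extra = np.random.default_rng(0).choice(minority, size=4 * len(minority))
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

# Hybrid step 2: train an ensemble, additionally reweighting the classes
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

With this kind of data, overall accuracy is a misleading metric; the minority-class recall and precision in the report are what the resampling and reweighting are meant to improve.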
Procedia PDF Downloads 174
310 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood of Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter (MLD), and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by varying four minimum logging diameters and five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees follows a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm MLD and a 15-year cutting cycle are the best combination: it showed the largest potential harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and holds a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet national wood demand through domestic production. The use of the current population structure and diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model provides insight into the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
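The stand table projection method used above can be sketched in a few lines: stems graduate between diameter classes at a rate set by the mean diameter increment, and everything above the minimum logging diameter after one cutting cycle is counted as harvestable. The class counts below are placeholders; only the 0.59 cm/yr growth rate and the 40 cm / 15-year combination come from the abstract:

```python
import numpy as np

CLASS_WIDTH = 10.0          # cm diameter classes: 0-10, 10-20, ..., 50-60
stems = np.array([80.0, 45.0, 28.0, 16.0, 9.0, 5.0])   # stems/ha (assumed)
GROWTH = 0.59               # mean annual diameter increment, cm/yr
MLD = 40.0                  # minimum logging diameter, cm
CYCLE = 15                  # cutting cycle, years

# Fraction of each class that moves up one class over the cycle
upgrowth = min(GROWTH * CYCLE / CLASS_WIDTH, 1.0)
projected = stems.copy()
moved = projected * upgrowth
moved[-1] = 0.0             # largest class has no higher class to graduate into
projected -= moved
projected[1:] += moved[:-1]  # stems graduate into the next class

# Count every stem whose class midpoint now exceeds the MLD as harvestable
midpoints = CLASS_WIDTH * (np.arange(len(stems)) + 0.5)
harvestable = projected[midpoints >= MLD].sum()
print(f"stems/ha above {MLD:.0f} cm after {CYCLE} yr: {harvestable:.1f}")
```

A full projection would also apply mortality and recruitment rates per class and convert stems to volume via a taper or volume equation; those refinements are omitted here for brevity.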
Procedia PDF Downloads 88
309 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco
Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi
Abstract:
In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s Morocco embarked on an ambitious program to increase the number of dams to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m³/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers resulting from soil erosion. Soil erosion is an important driving force in the processes affecting the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and the estimation of soil loss risk is the first step towards successful control of water erosion. The aim of this study is to estimate soil loss risk and its spatial distribution across the different regions of Morocco, and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly for large-scale basins, due to the simplicity of the model formulation and the easy availability of the datasets. The spatial distribution of the annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the North (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that the topographic factor is the main agent of soil erosion within these catchments. These results could be helpful for the planning of natural resource management and for implementing sustainable long-term management strategies, which are necessary for soil conservation and for preserving the projected economic life of the dams implemented.
Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco
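The RUSLE overlay itself is a cell-by-cell product of the factor rasters named above, A = R · K · LS · C · P. The numpy sketch below shows this on tiny illustrative grids; all factor values are placeholder assumptions, and in a real GIS workflow each array would be a co-registered raster layer:

```python
import numpy as np

# RUSLE: A = R * K * LS * C * P, applied cell by cell to factor rasters
R = np.full((3, 3), 80.0)                     # rainfall erosivity
K = np.array([[0.20, 0.30, 0.25],
              [0.30, 0.35, 0.30],
              [0.25, 0.30, 0.20]])            # soil erodibility
LS = np.array([[0.5, 1.2, 4.0],
               [0.8, 2.5, 6.0],
               [0.4, 1.0, 3.0]])              # slope length and steepness
C = np.full((3, 3), 0.3)                      # cover management
P = np.full((3, 3), 1.0)                      # support practice (none)

A = R * K * LS * C * P                        # annual soil loss, t/ha/year
print(A.round(1))
print("basin mean soil loss: %.1f t/ha/year" % A.mean())
```

Note how the steep-slope cells (large LS) dominate the result even with uniform rainfall and cover, which is the pattern the study reports: vulnerability tracks slope variation in the Rif and High Atlas watersheds.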
Procedia PDF Downloads 461