Search results for: alternative resolution disputes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5099

779 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, are now widely used in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and thus compromise the analysis and produce erroneous results. Naturally, this feature is of great importance when radionuclides are identified and their activity concentrations determined, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one first has to perform an adequate full-energy peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a set of reference calibration sources to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and again to be a very powerful tool. To create a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. In this study, the optimisation of two HPGe detector models through the Geant4 toolkit developed at CERN is described, with the goal of improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
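
Geant4 models themselves are written in C++; as an illustration of the verification step described above, the following Python sketch compares simulated and measured FEP efficiencies and reports the relative deviation. All counts, energies, and efficiency values here are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical verification step: compare simulated and measured
# full-energy peak (FEP) efficiencies for a set of calibration lines.
energies_keV = np.array([59.4, 661.7, 1173.2, 1332.5])   # example point-source lines
n_simulated = np.array([187432, 96110, 60975, 55310])    # FEP counts from the simulation
n_emitted = 1_000_000                                     # primaries generated per line

eff_sim = n_simulated / n_emitted                         # simulated FEP efficiency
eff_exp = np.array([0.190, 0.095, 0.061, 0.055])          # measured efficiencies (invented)

rel_dev = 100 * (eff_sim - eff_exp) / eff_exp             # relative deviation in %
print(np.round(rel_dev, 2), "-> mean |dev|:",
      round(float(np.mean(np.abs(rel_dev))), 2), "%")
```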

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 102
778 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and thus carry somewhat different representations of the above processes, can be combined to reduce the collective local biases in space, time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter that is quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art GCMs, i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete datasets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed, based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
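
The training-phase weight estimation described above amounts to a least-squares multiple regression of observed anomalies on member-model anomalies. A minimal Python sketch, with synthetic data standing in for the TIGGE forecasts, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_models = 120, 5                      # training period, member GCMs

obs = rng.gamma(2.0, 4.0, n_days)              # observed daily rainfall (illustrative)
fcst = obs[:, None] + rng.normal(0, 3, (n_days, n_models))  # biased member forecasts

# Superensemble training: regress observed anomalies on model anomalies.
F = fcst - fcst.mean(axis=0)                   # model anomalies over training period
y = obs - obs.mean()                           # observed anomalies
weights, *_ = np.linalg.lstsq(F, y, rcond=None)

# Forecast phase: weighted sum of anomalies added back to observed climatology.
new_fcst = fcst[-1]                            # stand-in for an independent forecast
superensemble = obs.mean() + (new_fcst - fcst.mean(axis=0)) @ weights
print(weights.round(3), superensemble.round(2))
```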

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 122
777 Maternal and Neonatal Outcome Analysis in Preterm Abdominal Delivery Underwent Umbilical Cord Milking Compared to Early Cord Clamping

Authors: Herlangga Pramaditya, Agus Sulistyono, Risa Etika, Budiono Budiono, Alvin Saputra

Abstract:

Preterm birth and anemia of prematurity are among the most common causes of morbidity and mortality in neonates, and anemia of preterm neonates has become a major issue. The timing of umbilical cord clamping after a baby is born determines the amount of blood transferred from the placenta to the fetus. Delayed Cord Clamping (DCC) has been proven to prevent anemia in neonates, but its use is constrained by concern about delaying neonatal resuscitation. Umbilical Cord Milking (UCM) could be an alternative method of managing the umbilical cord, owing to the active transfer of blood from the placenta to the fetus. The aim of this study was to analyze the differences in maternal and neonatal outcomes between preterm abdominal deliveries that underwent UCM and those that underwent Early Cord Clamping (ECC). This was an experimental study with a randomized post-test only control design. Maternal and neonatal outcomes were analyzed, with P < 0.05 considered significant; statistical comparison was carried out using the paired-samples t-test (two-tailed α = 0.05). The mean preoperative maternal hemoglobin in the UCM group compared to ECC was 10.9 ± 0.9 g/dL vs 10.4 ± 0.9 g/dL, and postoperative 11.1 ± 1.1 g/dL vs 10.5 ± 0.7 g/dL; the deltas were 0.2 ± 0.7 vs 0.1 ± 0.6, showing no significant difference (P = 0.395 vs 0.627). The mean duration of the third stage of labor in the UCM group vs ECC was 20.5 ± 3.5 vs 21.1 ± 3.3 seconds, showing no significant difference (P = 0.634). The amount of bleeding after delivery in the UCM group compared to ECC had a median of 190 cc (100-280 cc) vs 210 cc (150-330 cc), showing no significant difference (P = 0.083), and no postpartum hemorrhage was observed. The mean neonatal hemoglobin, hematocrit and erythrocyte values of the UCM group compared to ECC were 19.3 ± 0.7 vs 15.9 ± 0.8 g/dL, 57.1 ± 3.6% vs 47.2 ± 2.8%, and 5.4 ± 0.4 vs 4.5 ± 0.3, respectively, showing significant differences (P < 0.0001). No baby in the UCM group received a blood transfusion, while one baby in the ECC control group did. Umbilical cord milking was shown to increase the baby's blood components, such as hemoglobin, hematocrit, and erythrocytes, six hours after birth, as well as to lower the incidence of blood transfusions. No maternal or neonatal morbidity was found. Umbilical cord milking was the method of cord management that was more beneficial to the baby, with no adverse or negative effects on the mother.
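
For readers unfamiliar with the reported analysis, the following Python sketch shows a two-tailed paired-samples t-test of the kind cited above; the hemoglobin values are simulated for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative pre/post maternal hemoglobin (g/dL) for one group,
# mirroring the paired-samples comparison above (simulated, n assumed = 25).
hb_pre = rng.normal(10.9, 0.9, 25)
hb_post = hb_pre + rng.normal(0.2, 0.7, 25)

t, p = stats.ttest_rel(hb_pre, hb_post)       # two-tailed paired t-test, alpha = 0.05
print(f"t = {t:.2f}, p = {p:.3f}, significant: {p < 0.05}")
```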

Keywords: umbilical cord milking, early cord clamping, maternal and neonatal outcome, preterm, abdominal delivery

Procedia PDF Downloads 224
776 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents

Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady

Abstract:

Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is a current lack of consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation, and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings, as sketched after this abstract. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the 'early defect' CT classification. The group previously referred to as a 'progressive stage' defect on CT can be split into 2A and 2B categories: 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal-stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until they become symptom-free. At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation focuses on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, a CT scan is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (2B and 3) cannot heal and are not braced; rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.
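
The classification logic described above reduces to a simple lookup. The following Python sketch illustrates the proposed scheme; the CT finding labels are our shorthand, not terms taken verbatim from the protocol.

```python
def classify_pars_lesion(ct_finding: str, mri_high_signal: bool) -> str:
    """Map a CT finding plus MRI T2 high signal change (HSC) to the
    modified classification sketched in the abstract above."""
    if ct_finding == "normal":
        return "Type 0 (stress reaction)" if mri_high_signal else "no lesion"
    if ct_finding == "early defect":
        return "Type 1"
    if ct_finding == "progressive defect":
        return "Type 2A (HSC, healing potential)" if mri_high_signal else "Type 2B (chronic)"
    if ct_finding == "terminal defect":
        return "Type 3 (pseudarthrosis)"
    raise ValueError(f"unknown CT finding: {ct_finding!r}")

print(classify_pars_lesion("progressive defect", mri_high_signal=True))
```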

Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol

Procedia PDF Downloads 139
775 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress

Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin

Abstract:

Stress causes deleterious effects at the physical, psychological and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account new research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called general strategies (Wellbeing and Avoidance). This study presents the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using Mplus supports the existence of a model with six factors. The results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for dealing with stress and thus prevent mental health issues.
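
As an illustration of the internal-consistency check mentioned above, here is a minimal Python sketch of Cronbach's alpha on simulated item scores; the data and the three-item subscale are assumptions, not the study's instrument.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
factor = rng.normal(size=(300, 1))                      # one latent coping factor
scores = factor + rng.normal(scale=0.8, size=(300, 3))  # 3 items loading on it
print(round(cronbach_alpha(scores), 2))                 # internal consistency
```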

Keywords: acceptance, coping strategies, stress, validation process

Procedia PDF Downloads 328
774 The Regulation of Reputational Information in the Sharing Economy

Authors: Emre Bayamlıoğlu

Abstract:

This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with a special emphasis on the sharing economy (i.e., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the issue of governance and control of reputational information. The section first analyzes the legal consequences of algorithmic filtering systems that detect undesired comments, and how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing-economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback. Software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns with regard to data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as a writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.

Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy

Procedia PDF Downloads 452
773 Impact of Land-Use and Climate Change on the Population Structure and Distribution Range of the Rare and Endangered Dracaena ombet and Dobera glabra in Northern Ethiopia

Authors: Emiru Birhane, Tesfay Gidey, Haftu Abrha, Abrha Brhan, Amanuel Zenebe, Girmay Gebresamuel, Florent Noulèkoun

Abstract:

Dracaena ombet and Dobera glabra are two of the rarest and most endangered tree species in dryland areas. Unfortunately, their sustainability is being compromised by different anthropogenic and natural factors. However, the impacts of ongoing land-use and climate change on the population structure and distribution of the species remain little explored. This study was carried out in the grazing lands and hillside areas of the Desa'a dry Afromontane forest, northern Ethiopia, to characterize the population structure of the species and predict the impact of climate change on their potential distributions. In each land-use type, the abundance, diameter at breast height, and height of the trees were recorded using 70 sampling plots distributed over seven transects spaced one km apart. The geographic coordinates of each individual tree were also recorded. The results showed that the species populations were characterized by low abundance and an unstable population structure, the latter evinced by a lack of seedlings and mature trees. The study also revealed that the total abundance and dendrometric traits of the trees differed significantly between the two land uses. The hillside areas had a denser abundance of bigger and taller trees than the grazing lands. Climate change predictions using the MaxEnt model highlighted that future temperature increases coupled with reduced precipitation would lead to significant reductions in the suitable habitats of the species in northern Ethiopia. The species' suitable habitats were predicted to decline by 48-83% for D. ombet and 35-87% for D. glabra. Hence, to sustain the species populations, different strategies should be adopted, namely the introduction of alternative livelihoods (e.g., gathering non-timber forest products) to reduce the overexploitation of the species for subsistence income, and the protection of the current habitats that will remain suitable in the future using community-based exclosures. Additionally, the preservation of the species' seeds in gene banks is crucial to ensure their long-term conservation.
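
MaxEnt itself is a dedicated species-distribution package; as a rough illustration of the idea, the sketch below uses a logistic-regression presence/background classifier as a MaxEnt-like stand-in and measures the change in suitable habitat under a warmer, drier scenario. All data, predictors, and thresholds are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Illustrative presence/background data: two standardized bioclimatic
# predictors (temperature, precipitation) over the study region.
X_bg = rng.normal(size=(1000, 2))                       # background points
X_pres = rng.normal(loc=[-0.5, 0.8], size=(100, 2))     # presences: cooler, wetter
X = np.vstack([X_pres, X_bg])
y = np.r_[np.ones(100), np.zeros(1000)]

model = LogisticRegression().fit(X, y)                  # MaxEnt-like suitability model

grid = rng.normal(size=(5000, 2))                       # cells of the current region
future = grid + [1.0, -0.6]                             # warmer and drier scenario
suit_now = (model.predict_proba(grid)[:, 1] > 0.1).mean()
suit_fut = (model.predict_proba(future)[:, 1] > 0.1).mean()
print(f"suitable habitat change: {100 * (suit_fut - suit_now) / suit_now:.0f}%")
```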

Keywords: grazing lands, hillside areas, land-use change, MaxEnt, range limitation, rare and endangered tree species

Procedia PDF Downloads 66
772 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, considerable variability in quality and performance is expected even when good homogenization work has been carried out before feeding the processing plants. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined, the types of raw materials then grouped by these parameters, and finally a reference set of operational settings provided for each group. Associating the physical and chemical parameters of a unit operation, through benchmarking, with an optimal reference for metallurgical recovery and product quality translates into reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the algorithm. The results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting the processing of a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and life optimization of the mineral deposit.
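
A minimal Python sketch of the grouping-plus-benchmark idea, assuming standardized blend-composition parameters per homogenized ore pile; all values and column meanings are synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Illustrative blend-composition parameters per ore pile
# (e.g., Nb2O5 grade, BaO, P2O5, particle size); columns are assumptions.
piles = rng.normal(size=(200, 4))

X = StandardScaler().fit_transform(piles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Benchmark per cluster: e.g., the best historical metallurgical recovery,
# whose plant settings become the reference for new piles in that cluster.
recovery = rng.uniform(0.55, 0.80, size=200)
for c in range(3):
    best = recovery[labels == c].max()
    print(f"cluster {c}: {np.sum(labels == c)} piles, benchmark recovery {best:.2%}")
```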

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 133
771 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, for accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as is the case with Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method span a wide range: from credit derivatives pricing to economic capital calculation for the banking book, default risk charge and incremental risk charge computation for the trading book, and even risk types other than credit risk.
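
For concreteness, here is a minimal Python sketch of the COS density reconstruction at the heart of the method, applied to a toy case where the characteristic function is known in closed form (a standard normal). The portfolio-loss application would replace this with the factor-copula characteristic function.

```python
import numpy as np

def cos_density(phi, x, a, b, N=64):
    """Fourier-cosine (COS) series reconstruction of a density on [a, b]."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                        # the k = 0 term carries weight 1/2
    return F @ np.cos(np.outer(u, x - a))

def phi_normal(u):
    return np.exp(-0.5 * u**2)         # characteristic function of N(0, 1)

x = np.linspace(-4, 4, 9)
approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(f"max abs error: {np.max(np.abs(approx - exact)):.2e}")  # exponential convergence
```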

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 148
770 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities

Authors: Shoba Rathilal

Abstract:

High unemployment rates and a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the unemployment rate is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit in experienced and qualified potential employees is reported in South Africa, which is causing fierce rivalry among firms. Employers have reported that graduates are rarely able to meet the demands of the job, as there are gaps in their knowledge and conceptual understanding and in the other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, the rates of unemployment and the suitability of graduates appear to be skewed by race and social class, the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and possessed of entrepreneurial acumen. This requires a re-envisioning of the selection, sequencing, and pacing of learning, teaching, and assessment in higher education curricula and pedagogy. At a particular higher education institution in South Africa, design thinking (DT) and project-based learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a "distinctive education" that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. Using these methodologies requires students to solve real-world applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services. The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through the effective integration of theory and practice. In principle, both approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as "smooth" as anticipated. This presentation reports on an analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through surveys, evaluation feedback at workshops, and content analysis of project reports. Data were analyzed using document, content, and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT range from the capacity of both staff and students to engage with DT and PBL, to the contextual realities of higher education institutions, administrative processes, and resources. At the same time, implementation was enabled through the allocation of strategic funding and capacity-development workshops. These enabling factors, however, could not achieve maximum impact. In addition, recommendations on how DT and PBL could be adapted for differing contexts will be explored.

Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development

Procedia PDF Downloads 60
769 Development of a Turbulent Boundary Layer Wall-Pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers for which they were developed, and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
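
The study itself uses R; as a language-neutral illustration of forward stepwise feature selection, a Python sketch on synthetic data might look as follows (the feature meanings and coefficients are assumptions, not the wind-tunnel data):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
# Illustrative TBL features: e.g., frequency, boundary layer thickness,
# edge velocity, wall shear stress, plus two redundant columns.
X = rng.normal(size=(500, 6))
log_psd = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 500)

# Forward stepwise selection: add regressors one at a time while they
# improve the cross-validated fit, filtering out uncorrelated inputs.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(X, log_psd)
keep = sfs.get_support()
print("selected columns:", np.flatnonzero(keep))
model = LinearRegression().fit(X[:, keep], log_psd)
print("R^2 on training data:", round(model.score(X[:, keep], log_psd), 3))
```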

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 126
768 Multi-Criteria Assessment of Biogas Feedstock

Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough

Abstract:

Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with the entropy method (which measures uncertainty using probability theory) to determine the weights, and the TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed. The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41 gate fee to a cost of £22/tonne. Based on the calculated weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies upon the Analytic Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) then corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield through a synergistic effect induced by the feedstock mix, which favours positive biological interactions. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the agriculture sector currently contributes 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
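
A compact Python sketch of the entropy-weighting and TOPSIS steps described above, with an invented decision matrix standing in for the paper's measured criteria values:

```python
import numpy as np

# Decision matrix: rows = feedstocks, cols = criteria (all values invented).
# Criteria here: methane yield, availability, production cost, biogas production.
X = np.array([[350., 0.9, 22., 160.],    # grass silage
              [220., 1.0,  5.,  45.],    # manure
              [300., 0.4, 10., 120.],    # food waste
              [270., 0.6, 15., 100.]])   # sewage sludge
benefit = np.array([True, True, False, True])   # cost is the only 'lower is better'

# Entropy weights: criteria with more dispersion carry more information.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - E) / (1 - E).sum()

# TOPSIS: closeness to the ideal vs the anti-ideal alternative.
R = w * X / np.sqrt((X**2).sum(axis=0))
ideal = np.where(benefit, R.max(axis=0), R.min(axis=0))
anti = np.where(benefit, R.min(axis=0), R.max(axis=0))
d_plus = np.sqrt(((R - ideal)**2).sum(axis=1))
d_minus = np.sqrt(((R - anti)**2).sum(axis=1))
print((d_minus / (d_plus + d_minus)).round(3))  # closeness score per feedstock
```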

Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy

Procedia PDF Downloads 436
767 Transcriptomic Analysis for Differential Expression of Genes Involved in Secondary Metabolite Production in Narcissus Bulb and in vitro Callus

Authors: Aleya Ferdausi, Meriel Jones, Anthony Halls

Abstract:

The Amaryllidaceae genus Narcissus contains secondary metabolites that are important sources of bioactive compounds, such as pharmaceuticals, indicating that their biological activity extends from the native plant to humans. Transcriptome analysis (RNA-seq) is an effective platform for the identification and functional characterization of candidate genes, as well as for identifying genes encoding uncharacterized enzymes. The biotechnological production of secondary metabolites in plant cell or organ cultures has become an attractive alternative to the extraction of whole plant material. The biochemical pathways for the production of secondary metabolites require primary metabolites to undergo a series of modifications catalyzed by enzymes such as cytochrome P450s, methyltransferases, glycosyltransferases, and acyltransferases. Differential gene expression in Narcissus was analyzed under two conditions, i.e., field and in vitro callus. Callus was obtained on modified MS (Murashige and Skoog) media supplemented with growth regulators, using twin-scale explants from bulbs of Narcissus cv. Carlton. A total of 2153 differentially expressed transcripts were detected between Narcissus bulb and in vitro callus, and 78.95% of these were annotated. Genes involved in alkaloid biosynthesis were expressed under both conditions, i.e., cytochrome P450s, O-methyltransferases (OMTs), NADP/NADPH dehydrogenases or reductases, SAM-synthetases or decarboxylases, 3-ketoacyl-CoA, acyl-CoA, cinnamoyl-CoA, cinnamate 4-hydroxylase, alcohol dehydrogenase, caffeic acid, N-methyltransferase, and NADPH-cytochrome P450s. However, the cytochrome P450s and OMTs involved in the later stages of Amaryllidaceae alkaloid biosynthesis were mainly up-regulated in the field samples, whereas the enzymes involved in the initial biosynthetic pathways, i.e., fructose-bisphosphate aldolase, aminotransferases, dehydrogenases, hydroxymethylglutarate and glutamate synthase, leading to the biosynthesis of the precursors tyrosine, phenylalanine and tryptophan for secondary metabolites, were up-regulated in callus. Knowledge of the probable genes involved in secondary metabolism and their regulation in different tissues will provide insight into Narcissus plant biology related to alkaloid production.

Keywords: narcissus, callus, transcriptomics, secondary metabolites

Procedia PDF Downloads 132
766 Home Made Rice Beer Waste (Choak): A Low Cost Feed for Sustainable Poultry Production

Authors: Vinay Singh, Chandra Deo, Asit Chakrabarti, Lopamudra Sahoo, Mahak Singh, Rakesh Kumar, Dinesh Kumar, H. Bharati, Biswajit Das, V. K. Mishra

Abstract:

The most widely used feed resources in poultry feed, like maize and soybean, are expensive as well as in short supply. Hence, there is a need to utilize non-conventional feed ingredients to cut down feed costs. As an alternative, brewery by-products like brewers' dried grains are potential non-conventional feed resources. North-East India is inhabited by many tribes, and most of these tribes prepare their indigenous local brew, mostly using rice grains as the primary substrate. Choak, a homemade rice beer waste, is an excellent and cheap source of protein and other nutrients. Fresh homemade rice beer waste (rice brewers' grain) was collected locally. Proximate analysis indicated 28.53% crude protein, 92.76% dry matter, 5.02% ether extract, 7.83% crude fibre, 2.85% total ash, 0.67% acid-insoluble ash, 0.91% calcium, and 0.55% total phosphorus. A feeding trial with 5 treatments (incorporating rice beer waste at inclusion levels of 0, 10, 20, 30 and 40% by replacing maize and soybean in the basal diet) was conducted with 25 laying hens per treatment for 16 weeks under a completely randomized design in order to study the production performance, blood biochemical parameters, immunity, egg quality and cost economics of laying hens. The results showed significant differences (P<0.01) in egg production, egg mass, FCR per dozen eggs, FCR per kg egg mass, and net FCR. However, there were no significant differences in body weight, feed intake, or egg weight. Total serum cholesterol reduced significantly (P<0.01) at 40% inclusion of rice beer waste. Additionally, the egg Haugh unit increased significantly (P<0.01) with increasing levels of rice beer waste. The inclusion of 20% rice brewers' dried grain reduced feed cost per kg egg mass and per dozen eggs produced by Rs. 15.97 and Rs. 9.99, respectively. Choak (homemade rice beer waste) can thus be safely incorporated into the diet of laying hens at a 20% inclusion level for better production performance and cost-effectiveness.

Keywords: choak, rice beer waste, laying hen, production performance, cost economics

Procedia PDF Downloads 43
765 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5" ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat-station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch's defensible space maps was combined with Black Swan's patented approach, which uses 39 other risk characteristics, to produce a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
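
The spectral vegetation index maps mentioned above are produced with proprietary algorithms; as a generic illustration, the sketch below computes NDVI, one common such index, from hypothetical near-infrared and red reflectance bands and thresholds it into a crude fuel mask. The bands and the threshold are assumptions.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from multispectral bands."""
    return (nir - red) / (nir + red + 1e-9)

# Illustrative reflectance rasters for one parcel (values invented).
rng = np.random.default_rng(6)
nir = rng.uniform(0.2, 0.6, size=(100, 100))
red = rng.uniform(0.05, 0.3, size=(100, 100))

vi = ndvi(nir, red)
flammable = vi > 0.3          # hypothetical threshold for live vegetative fuel
print(f"vegetated fraction of parcel: {flammable.mean():.1%}")
```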

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 93
764 A Review Investigating the Potential of Zooxanthellae to Be Genetically Engineered to Combat Coral Bleaching

Authors: Anuschka Curran, Sandra Barnard

Abstract:

Coral reefs are among the most diverse and productive ecosystems on the planet, but due to the impact of climate change, they are dying off, primarily through coral bleaching. Coral bleaching can be described as the process by which zooxanthellae (algal endosymbionts) are expelled from the gastrodermal cavity of the respective coral host, causing increased coral whitening. The general consensus is that mass coral bleaching is due to the dysfunction of photosynthetic processes in the zooxanthellae as a result of the combined action of elevated temperature and light stress. The question, then, is: do zooxanthellae have the potential to play a key role in the future of coral reef restoration through genetic engineering? The aim of this study is, firstly, to review the different zooxanthellae taxa and their traits with respect to environmental stress and, secondly, to review the information available on the protective mechanisms present in zooxanthellae cells experiencing temperature fluctuations, concentrating specifically on heat shock proteins and the antioxidant stress response of zooxanthellae. The eight clades (A-H) previously recognized have been redefined into seven genera. Different zooxanthellae taxa exhibit different traits, such as their photosynthetic stress responses to light and temperature. Zooxanthellae can determine the amount and type of heat shock proteins (Hsps) present during a heat response, and they can regulate both the host's Hsps and their own. Hsps generally found in genotype C3 zooxanthellae, such as Hsp70 and Hsp90, contribute to the thermal stress response of the respective coral host. Antioxidant activity, found both within exposed coral tissue and in the zooxanthellae cells, can prevent coral hosts from expelling their endosymbionts. The up-regulation of gene expression, which may mitigate the thermal stress induction of any of the physiological aspects discussed, can ensure stable coral-zooxanthellae symbiosis in the future and presents a viable alternative strategy to preserve reefs amidst climate change. In conclusion, despite the unusual molecular design of zooxanthellae, genetic engineering is a useful tool for understanding and manipulating variables and systems within these organisms, and it therefore presents a solution that can help ensure stable coral-zooxanthellae symbiosis in the future.

Keywords: antioxidant enzymes, genetic engineering, heat-shock proteins, Symbiodinium

Procedia PDF Downloads 173
763 Utilization of Silk Waste as Fishmeal Replacement: Growth Performance of Cyprinus carpio Juveniles Fed with Bombyx mori Pupae

Authors: Goksen Capar, Levent Dogankaya

Abstract:

According to the circular economy model, resource productivity should be maximized and waste should be reduced. Since the earth's natural resources are continuously being depleted, resource recovery has gained great interest in recent years. As part of our research on the recovery and reuse of silk wastes, this paper focuses on the utilization of silkworm pupae as a fishmeal replacement, which would spare the original fishmeal raw material, namely the fish itself. This, in turn, would contribute to the sustainable management of wild fish resources. Silk fibre is secreted by the silkworm Bombyx mori in order to construct a 'room' for itself during its transformation from pupa to adult moth. When the cocoons are boiled in hot water, the silk fibre becomes loose, and silk yarn is produced by combining thin silk fibres. The remaining wastes are 1) sericin protein, which is dissolved in the water, and 2) the remaining part of the cocoon, including the dead body of the B. mori pupa. In this study, an eight-week trial was carried out to determine the growth performance of common carp juveniles fed waste silkworm pupae meal (SWPM) as a replacement for fishmeal (FM). Four isonitrogenous diets (40% CP) were prepared, replacing 0%, 33%, 50%, and 100% of the dietary FM with non-defatted silkworm pupae meal as a dietary protein source for the experiments in C. carpio. Triplicate groups comprising 20 fish (0.92 ± 0.29 g) were fed twice a day with one of the four diets. Over the 8-week period, the results showed that the diet deriving 50% of its protein from SWPM produced significantly higher (p ≤ 0.05) growth rates than the other groups. Increasing SWPM levels beyond this resulted in a decrease in growth performance, and significantly lower growth (p ≤ 0.05) was observed with the 100% SWPM diet. The study demonstrates that it is practical to replace 50% of the FM protein with SWPM, with significantly better utilization of the diet, but higher SWPM levels are not recommended for juvenile carp. Further experiments are under way to obtain more detailed results on the possible effects of this alternative diet on the growth performance of juvenile carp.
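
Growth performance in trials like this is often summarized by the specific growth rate (SGR); the Python sketch below computes it from the reported initial weight of 0.92 g over the 8-week (56-day) trial, with the final weights invented purely for illustration.

```python
import numpy as np

def specific_growth_rate(w_initial: float, w_final: float, days: int) -> float:
    """SGR in %/day, a standard growth metric in fish feeding trials."""
    return 100 * (np.log(w_final) - np.log(w_initial)) / days

# Hypothetical final weights per diet (not the study's results).
for label, w_final in [("100% FM", 9.5), ("50% SWPM", 10.4), ("100% SWPM", 6.8)]:
    print(label, round(specific_growth_rate(0.92, w_final, 56), 2), "%/day")
```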

Keywords: Bombyx mori, Cyprinus carpio, fish meal, silk, waste pupae

Procedia PDF Downloads 137
762 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems

Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto

Abstract:

In the last four decades, the biological activities of selenium compounds have received great attention, particularly hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer, and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Among the nucleophiles present in biological systems, our interest centers on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a reaction route in two stages, the first consisting of the substitution of the chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that hydroxyls from water react preferentially with the selenuranes and are then replaced by the thiol groups, with energy values of -106.0730423 kcal/mol for double substitution by hydroxyl groups and 96.63078511 kcal/mol for thiol groups. Solvation and pH reduction promote this route, increasing the energy value for the reaction with hydroxyl groups to -50.75637672 kcal/mol and decreasing the energy value for thiol groups to 7.917767189 kcal/mol. Alternative routes were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.

Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors

Procedia PDF Downloads 289
761 A Rare Case of Dissection of Cervical Portion of Internal Carotid Artery, Diagnosed Postpartum

Authors: Bidisha Chatterjee, Sonal Grover, Rekha Gurung

Abstract:

Postpartum dissection of the internal carotid artery is a relatively rare condition and is considered the underlying aetiology in 5% to 25% of strokes in patients under the age of 30 to 45 years. However, 86% of these cases recover completely and 14% have mild focal neurological symptoms. Prognosis is generally good with early intervention. The quoted risk of a repeat carotid artery dissection in subsequent pregnancies is less than 2%. A 36-year-old Caucasian primipara presented with tachycardia on postnatal day one after a forceps delivery. In the intrapartum period, she had a history of prolonged rupture of membranes, developed intrapartum sepsis, and was treated with antibiotics. Postpartum ECG showed septal and inferior T-wave inversion and a troponin level of 19. An echocardiogram subsequently ruled out postpartum cardiomyopathy. Repeat ECG showed improvement of the previous changes, and in the absence of symptoms, no intervention was warranted. On day 4 post-delivery, she developed a droopy right eyelid, pain around the right eye, and itching in the right ear. On examination, she had right-sided ptosis and unequal pupils (right miotic pupil). Cranial nerve examination, reflexes, sensory examination, and muscle power were normal. Apart from migraine, there was no medical or family history of note. In view of the right-sided Horner's syndrome, she had a CT angiogram and subsequently MR/MRA and was diagnosed with dissection of the cervical portion of the right internal carotid artery. She was discharged on a course of aspirin 75 mg. By the 6-week postnatal follow-up, the patient had recovered significantly, with occasional episodes of unequal pupils and tingling of the right toes that resolved spontaneously. Cervical artery dissection, including vertebral artery dissection (VAD) and carotid artery dissection, is a rare complication of pregnancy, with an estimated annual incidence of 2.6-3 per 100,000 pregnancy hospitalizations. The aetiology remains unclear, though trauma during straining in labour, underlying arterial disease, and preeclampsia have been implicated. The hypercoagulable state of pregnancy and the puerperium could also be an important factor. 60-90% of cases present with severe headache and neck pain, which generally precede neurological symptoms such as ipsilateral Horner's syndrome, retro-orbital pain, tinnitus, and cranial nerve palsy. Although rare, delayed diagnosis and management can lead to severe and permanent neurological deficits. Patients with a strong index of suspicion should undergo MRI or MRA of the head and neck. Antithrombotic and antiplatelet therapy forms the mainstay of treatment, with selected cases needing endovascular stenting. The long-term prognosis is favourable, with either complete resolution or minimal deficit if treatment is prompt. Patients should be counselled about the recurrence risk and the possibility of stroke in a future pregnancy. Cervical artery dissection is rare and treatable but needs early diagnosis and treatment. Postpartum headache and neck pain with neurological symptoms should prompt urgent imaging followed by antithrombotic and/or antiplatelet therapy. Most cases resolve completely or with minimal sequelae.

Keywords: postpartum, dissection of internal carotid artery, magnetic resonance angiogram, magnetic resonance imaging, antiplatelet, antithrombotic

Procedia PDF Downloads 83
760 Expectations of Unvaccinated Health Workers in Greece and the Question of Trust: A Qualitative Study of Vaccine Hesitancy

Authors: Sideri Katerina, Chanania Eleni

Abstract:

The reasons why people, especially health workers, remain unvaccinated are complex. In Greece, 2 percent of health workers (around 7,000) remain unvaccinated, despite the fact that for this group vaccination against COVID-19 is mandatory. In April 2022, the Greek health minister repeated that unvaccinated health care workers would remain suspended from their jobs 'for as long as the pandemic lasts,' explaining that the suspension of the workers in question was 'entirely their choice' and that health professionals who do not believe in vaccines 'do not believe in their own science.' Although policy circles around the world often link vaccine hesitancy to ignorance of science or misinformation, several recently published qualitative studies show that vaccine hesitancy is the result of a combination of factors, which include distrust towards elites and the system of innovation, and distrust towards government. In a similar spirit, some commentators warn that labeling hesitancy as 'anti-science' is bad politics. In this paper, we work within the tradition of science and technology studies (STS), taking the view that people draw upon personal associations to enact and express civic concern with an issue: the enactment of public concern involves the articulation of threats to actors' way of life, personal values, relationships, lived experiences, broader societal values, and institutional structures. To this effect, we have conducted 27 in-depth interviews with unvaccinated Greek health workers and are in the process of conducting 20 more. We have so far found that, rather than being a question of believing in 'facts', vaccine hesitancy reflects deep distrust towards those charged with making decisions and towards pharmaceutical companies, and that emotions (rather than rational thinking) play a crucial role in the formation of attitudes and the making of decisions. We need to dig deeper to understand the causes of distrust towards technical government and the ways in which publics conceive of, and want to take part in, the politics of innovation. We particularly address the question of the effectiveness of mandatory vaccination of health workers and whether such top-down regulatory measures further polarize society, finally discussing alternative regulatory approaches and governance structures.

Keywords: vaccine hesitancy, innovation, trust in vaccines, sociology of vaccines, attitude drivers towards scientific information, governance

Procedia PDF Downloads 59
759 Solar-Thermal-Electric Stirling Engine-Powered System for Residential Units

Authors: Florian Misoc, Cyril Okhio, Joshua Tolbert, Nick Carlin, Thomas Ramey

Abstract:

This project focuses on designing a Stirling engine for a solar-thermal-electrical system that can supply electric power to a single residential unit. Since the Stirling engine is a heat engine that can operate from any available heat source, it is notable for its ability to generate clean and reliable energy without emissions. Given the need for alternative energy sources, Stirling engines are making a comeback with recent technologies, including improved thermal energy conservation during the heat-transfer process. Recent reviews show mounting evidence and positive test results that Stirling engines can produce a constant energy supply ranging from 5 kW to 20 kW. Solar power is one of the many heat sources suitable for Stirling engines, and using solar energy to operate them is an idea considered by many researchers due to the engine's ease of adaptability. In this project, the Stirling engine developed was also designed and tested to operate from a biomass energy source, i.e., a wood-pellet stove, during periods of low solar radiation, with good results. An engine efficiency of 20% was estimated, and 18% was measured, making it suitable for residential applications. The effort reported here aimed at exploring the parameters necessary to design, build and test a ‘Solar Powered Stirling Engine (SPSE)’ using water (H₂O) as the heat-transfer medium and nitrogen as the working gas, targeting an efficiency of 20% or more. The main objectives of this work consisted of converting a V-twin cylinder air compressor into an alpha-type Stirling engine and constructing a solar water heater, using an automotive radiator as the high-temperature reservoir for the Stirling engine together with an array of fixed mirrors that concentrate the solar radiation on the radiator/high-temperature reservoir. The low-temperature reservoir is the surrounding air at ambient temperature. This work has determined that a low-cost system is sufficiently efficient and reliable: off-the-shelf components were used, and the ability of the final engine design to meet the electricity needs of a small residence was estimated.
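
For context, the reported 18-20% figures sit close to the thermodynamic ceiling for the temperatures such a water-based system can reach. A minimal sketch of that bound, assuming a hot reservoir near the boiling point of water (~100 °C) and an ambient cold reservoir (~25 °C); these temperatures are illustrative assumptions, not values reported by the authors:

```python
# Carnot upper bound on the efficiency of any heat engine, Stirling included.
# Reservoir temperatures below are assumed, not taken from the paper.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal (Carnot) efficiency between reservoirs at t_hot_k and t_cold_k, in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

t_hot = 373.0   # assumed: water-based high-temperature reservoir near boiling, ~100 degC
t_cold = 298.0  # assumed: ambient-air low-temperature reservoir, ~25 degC

print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")  # ~20.1%
```

With these assumed temperatures the ideal limit comes out near 20%, consistent with the 20% estimated and 18% measured efficiencies reported above.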

Keywords: stirling engine, solar-thermal, power inverter, alternator

Procedia PDF Downloads 259
758 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced in a fed-batch vacuum evaporative crystallizer. Crystallization quality is assessed from the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, expressed as the mean aperture (MA), and the width of the distribution, expressed as the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S₀) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. The initial crystal size (L₀) was found not to play a significant role, so these datasets were used to obtain a simple but online-implementable 6-input crystal size model. The goodness of the resulting regression model was evaluated: the coefficient of determination, R², was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (~1.0) and the reasonably low MARE indicate that the model can predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during crystallization in a fed-batch vacuum evaporative crystallizer.
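
A minimal sketch of this kind of workflow, assuming synthetic data: a 2⁷ factorial design (128 runs), a multiple linear regression fitted after dropping L₀, and the two reported quality metrics. Only the factor naming follows the abstract; the response coefficients are invented for illustration:

```python
# Sketch: regression soft sensor fitted on a 2-level factorial design.
# Synthetic stand-in for the study's datasets; coefficients are invented.

import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# 2-level factorial design over 7 coded factors (-1/+1): 2**7 = 128 runs
# Column order assumed: L0, T, P, Ff, Fs, S0, t
X_full = np.array(list(product([-1.0, 1.0], repeat=7)))

# Hypothetical response: L0 (first column) gets a negligible effect,
# mirroring the study's finding that it plays no significant role.
coef = np.array([0.005, 0.15, -0.10, 0.12, 0.08, -0.06, 0.14])
y = 1.2 + X_full @ coef + rng.normal(scale=0.02, size=len(X_full))

# Drop L0 and fit the 6-input model
X = X_full[:, 1:]
model = LinearRegression().fit(X, y)

r2 = model.score(X, y)                             # coefficient of determination
mare = np.max(np.abs((y - model.predict(X)) / y))  # maximum absolute relative error
print(f"R^2 = {r2:.3f}, MARE = {mare:.1%}")
```

The same two statistics (R² and MARE) are what the abstract reports as 0.994 and 4.6%; the sketch only shows where they come from, not the study's actual model.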

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 196
757 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be key in reducing cancer-related mortality. In cases where surgical intervention is required for cancer treatment, accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at the microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues/cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique's overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm were collected from 5 patients (to date) who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation; Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2 μm spots upon excitation at 785 nm, to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm⁻¹, was covered with five consecutive full scans lasting 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm⁻¹ and 1628 cm⁻¹ Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the 2800-3000 cm⁻¹ range due to CH₂ stretching of lipids and CH₃ stretching of proteins. The Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
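
A minimal sketch of the least-squares peak fitting used to quantify such modes, here on a synthetic band near 1556 cm⁻¹. All peak parameters and noise levels are illustrative assumptions, not measured values from the study:

```python
# Sketch: least-squares fit of a single Raman mode with a Lorentzian line shape.
# The spectrum below is synthetic; amplitudes, widths and noise are invented.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amplitude, center, width, offset):
    """Lorentzian line shape (width = HWHM) on a flat baseline."""
    return amplitude * width**2 / ((x - center) ** 2 + width**2) + offset

# Synthetic spectrum: one band near 1556 cm^-1 plus noise
rng = np.random.default_rng(1)
shift = np.linspace(1500.0, 1700.0, 400)             # Raman shift axis (cm^-1)
signal = lorentzian(shift, 120.0, 1556.0, 8.0, 30.0)
spectrum = signal + rng.normal(scale=3.0, size=shift.size)

# Least-squares fit with rough initial guesses
p0 = [100.0, 1550.0, 10.0, 20.0]
popt, pcov = curve_fit(lorentzian, shift, spectrum, p0=p0)
amp, center, width, offset = popt
print(f"fitted center = {center:.1f} cm^-1, FWHM = {2 * width:.1f} cm^-1")
```

The fitted amplitude is the quantity one would compare between healthy and malignant spectra; in practice a multi-peak model over the full 500-3200 cm⁻¹ window would be fitted rather than a single band.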

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 77
756 Developing Social Responsibility Values in Nascent Entrepreneurs through Role-Play: An Explorative Study of University Students in the United Kingdom

Authors: David W. Taylor, Fernando Lourenço, Carolyn Branston, Paul Tucker

Abstract:

There is an increasing number of students at universities in the United Kingdom engaging in entrepreneurship role-play to explore business start-up as a career alternative to employment. These role-play activities have been shown to have a positive influence on students' entrepreneurial intentions. Universities also play a role in developing graduates' awareness of social responsibility; however, social responsibility is often missing from these entrepreneurship role-plays. It is important that such activities include the development of values that support social responsibility, in line with those running hybrid, humane and sustainable enterprises, and do not simply focus on profit. The Young Enterprise (YE) Start-Up programme is an example of a role-play activity gaining popularity amongst United Kingdom universities seeking ways to give students insight into a business start-up. A post-92 university in the North-West of England has adapted the traditional YE Directorship roles (e.g., Marketing Director, Sales Director) by including a Corporate Social Responsibility (CSR) Director in all of the team-based YE Start-Up businesses. The aim of introducing this Directorship was to observe whether such a role would help create a more socially responsible value system within each company and, in turn, shape business decisions. This paper investigates role-play as a tool to help enterprise educators develop socially responsible attitudes and values in nascent entrepreneurs. A mixed qualitative methodology, including interviews, role-play and reflection, was used to help students develop positive value characteristics through the exploration of unethical and selfish behaviours. The initial findings indicate that role-play helped CSR Directors learn and gain insights into the importance of corporate social responsibility, influenced the values and actions of their YE Start-Ups, and increased the likelihood that, if participants were to launch a business post-graduation, it would be socially responsible. These findings help inform educators on how to develop socially responsible nascent entrepreneurs within a traditionally profit-oriented business model.

Keywords: student entrepreneurship, young enterprise, social responsibility, role-play, values

Procedia PDF Downloads 134
755 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites

Authors: A. Atli, K. Candelier, J. Alteyrac

Abstract:

Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs), produced by blending biopolymers and natural fillers, not only permit tailoring of the desired material properties but also offer a way to meet environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with different amounts of spruce sawdust as filler. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal and structural properties of these WPCs were characterized by different analytical methods, such as tensile, flexural and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC) and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attack were determined in relation to wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amounts of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) for fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except that the WPC with the highest wood content (30 wt%) was rated 2 (slight attack), still indicating long-term durability. All the results showed the possibility of producing easily injectable composite materials with adjustable properties by incorporating spruce sawdust into BIOPLAST® GS 2189. Such lightweight WPCs make it possible both to recycle wood industry by-products and to produce a fully ecological material.

Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs

Procedia PDF Downloads 126
754 Biochar from Empty Fruit Bunches Generated in the Palm Oil Extraction and Its Nutrients Contribution in Cultivated Soils with Elaeis guineensis in Casanare, Colombia

Authors: Alvarado M. Lady G., Ortiz V. Yaylenne, Quintero B. Quelbis R.

Abstract:

The oil palm sector has seen significant growth in Colombia since the introduction of policies stimulating the use of biofuels, which ultimately contributes to the reduction of greenhouse gases (GHG) that harm not only the environment but also people's health. However, the policy of using biofuels has been strongly questioned because of the impacts it can generate; one example is the increase of other, more harmful GHGs such as CH₄, which stems from the amount of solid waste generated. The department of Casanare is estimated to be one of the country's major palm oil producers, given that it has recently expanded its sown area, which implies an increase in the waste generated, primarily at the industrial stage. For this reason, this study evaluated the agronomic potential of biochar obtained from empty fruit bunches and its nutritional contribution in soils cultivated with Elaeis guineensis in Casanare, Colombia. The biochar was obtained by slow pyrolysis of the bunches in a retort oven at an average temperature of 190 °C and a residence time of 8 hours. The final product, together with a soil sample from an Elaeis guineensis plantation located in Tauramena, Casanare, was taken to the laboratory for physical and chemical analysis. From the results obtained, combined with bibliographic reports of the nutrient demand of this crop, the possible nutritional contribution of the biochar was determined. The estimated cultivation requirements are 12.1 kg·ha⁻¹ of nitrogen, 59.3 kg·ha⁻¹ of potassium, -31.5 kg·ha⁻¹ of magnesium and 5.6 kg·ha⁻¹ of phosphorus, against biochar contributions of 143.1 kg·ha⁻¹, 1204.5 kg·ha⁻¹, 39.2 kg·ha⁻¹ and 71.6 kg·ha⁻¹, respectively. The incorporation of biochar into the soil would significantly improve the concentrations of N, P, K and Mg, nutrients considered important for palm oil yield, in line with the importance of nutrient recycling in sustainable agricultural production systems. Biochar application also improves the physical properties of soils, mainly moisture retention, and regulates nutrient availability for plant absorption, yielding economic savings on synthetic fertilizers and irrigation water. It likewise becomes an alternative for managing agricultural waste, avoiding the involuntary greenhouse gas emissions from decomposition in the field and reducing the CO₂ content of the atmosphere.
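
A small sketch of the nutrient balance implied by these figures. The requirement and contribution values are those reported in the abstract; only the surplus computation is added here for illustration:

```python
# Nutrient balance from the abstract's reported figures (kg/ha).
# The surplus (contribution minus requirement) is our own arithmetic.

requirements = {"N": 12.1, "K": 59.3, "Mg": -31.5, "P": 5.6}      # crop demand, as reported
contribution = {"N": 143.1, "K": 1204.5, "Mg": 39.2, "P": 71.6}   # from biochar, as reported

for nutrient, demand in requirements.items():
    surplus = contribution[nutrient] - demand
    print(f"{nutrient}: contribution {contribution[nutrient]} kg/ha "
          f"vs demand {demand} kg/ha -> surplus {surplus:+.1f} kg/ha")
```

For every nutrient listed, the biochar contribution exceeds the stated crop requirement, which is the basis of the abstract's claim that incorporation would significantly improve N, P, K and Mg concentrations.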

Keywords: biochar, nutrient recycling, oil palm, pyrolysis

Procedia PDF Downloads 145
753 Factors Associated with Commencement of Non-Invasive Ventilation

Authors: Manoj Kumar Reddy Pulim, Lakshmi Muthukrishnan, Geetha Jayapathy, Radhika Raman

Abstract:

Introduction: In the past two decades, noninvasive positive pressure ventilation (NIPPV) has emerged as one of the most important advances in the management of both acute and chronic respiratory failure in children. In the acute setting, it is an alternative to intubation, with the goals of preserving normal physiologic functions, decreasing airway injury and preventing respiratory tract infections. There is a need to determine the clinical profile and parameters that point towards the need for NIV in the pediatric emergency setting. Objectives: i) To study the clinical profile of children who required non-invasive ventilation and invasive ventilation; ii) To study the clinical parameters common to children who required non-invasive ventilation. Methods: All children between one month and 18 years of age who were intubated in the pediatric emergency department, and those for whom the decision to commence Non-Invasive Ventilation was made in the Emergency Room, were included in the study. Children were transferred to the Paediatric Intensive Care Unit, started on Non-Invasive Ventilation as per our hospital policy and followed up in the Paediatric Intensive Care Unit. The clinical profile of each child, including age, gender, diagnosis and indication for intubation, was documented, along with clinical parameters such as respiratory rate, heart rate, saturation and grunting. The parameters obtained were subjected to statistical analysis. Observations: Airway disease (bronchiolitis 25%, viral-induced wheeze 22%) was the common diagnosis among the 32 children who required Non-Invasive Ventilation. Neuromuscular disorder was the common diagnosis in the 27 children (78%) who were intubated. The 17 children commenced on Non-Invasive Ventilation who later needed invasive ventilation had neuromuscular disease. A high-flow nasal cannula was used in 32 children and mask ventilation in 17. Clinical parameters common to the Non-Invasive Ventilation group were age < 1 year (17), tachycardia n = 7 (22%), tachypnea n = 23 (72%), severe respiratory distress n = 9 (28%), grunt n = 7 (22%) and SpO₂ 80% to 90% (n = 16). Children in the Non-Invasive Ventilation + intubation group were > 3 years old (9) and had tachycardia 7 (41%) and tachypnea 9 (53%), with a male predominance (n = 9). In the statistical comparison among the 3 groups, the p-value was significant for pH, saturation and use of inotropes. Conclusion: Invasive ventilation can be avoided in the paediatric Emergency Department in children with airway disease by commencing Non-Invasive Ventilation early. Intubation in the pediatric emergency department has a higher association with neuromuscular disorders.

Keywords: clinical parameters, indications, non invasive ventilation, paediatric emergency room

Procedia PDF Downloads 314
752 Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response

Authors: Mona Solvoll, Ragnhild Kr. Olsen

Abstract:

Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readership and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how the audience has received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers and local advertisers. Basic methodologies: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1,600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework. Findings: The analysis indicates that paywalls aim more at reducing digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining “old” print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price if the defensive print focus drives away younger digital readers and hampers the recruitment of new audiences, as some previous studies have indicated. Analysis of young readers' news habits indicates that attracting the younger audience to traditional local news providers is particularly challenging and that they are more prone to seek alternative news sources than the older audience. Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for existing customers, but it is far less suited to attracting new ones. The paywall is a short-sighted strategy that drives away younger readers and paves the road for substitute offerings, particularly Facebook.

Keywords: business model, newspapers, paywall, user payment

Procedia PDF Downloads 253
751 Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan

Authors: Mustafa Rezazada, Takuya Maruyama

Abstract:

Cars have been strongly integrated into human life since their introduction, and this interaction is most evident in the urban context. Shifting city residents from driving private vehicles to public transit has therefore been a big challenge. Carsharing, as an innovative, environmentally friendly transport alternative, has made a significant contribution to this transition so far: it has helped reduce household car ownership, demand for on-street parking and the number of kilometers traveled by car, and it affects the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to cities in developing countries. This study was conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior, and to examine the possibility of introducing a carsharing system. The study established a new survey method called the Onboard Carsharing Survey (OCS), in which carpooling passengers aboard are interviewed following the Onboard Transit Survey (OTS) guideline with a few refinements. The survey focuses on respondents' daily travel behavior and a hypothetical stated choice of carsharing opportunities, followed by an aggregate analysis. The survey results indicate the following: nearly two-thirds of the respondents (62%) have been carpooling every day for 5 years or more; more than half of the respondents are not satisfied with current modes; and, among other attributes, traffic congestion, the environment and insufficient public transport were ranked by survey participants as the most critical issues in daily transportation. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice game scenarios. Overall, the findings show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes and the respondents' stated interest are likely to affect the future of carsharing positively in Kabul City. The modal choice in this study is limited to carpooling and carsharing; more choice sets, including bus, cycling and walking, will have to be added for further evaluation.

Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning

Procedia PDF Downloads 118
750 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is a public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly burden on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments available so far can only alleviate symptoms rather than cure the disease or stop its progression. Currently, there are several ways to diagnose AD: medical imaging can be used to distinguish between AD, other dementias and early-onset AD, and cerebrospinal fluid (CSF) can be analyzed. Compared with other diagnostic tools, the blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive and more cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), to perform data analysis and develop a prediction model. We used independent analysis of the datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks and decision trees in SAS Enterprise Miner to validate the biomarkers. The data, generated from ADNI, contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI) and patients with Alzheimer's disease (AD). Participants' samples were separated into two pairings, healthy versus MCI and healthy versus AD, which we used to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two pairings (healthy vs. AD, healthy vs. MCI) before applying the machine learning algorithms. We then built models with the 4 machine learning methods; the best AUCs of the two pairings were 0.991 and 0.709, respectively. We want to stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
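
A minimal sketch of the pipeline described, a t-test feature filter followed by a classifier scored by AUC. The study used SAS Enterprise Guide/Miner; this scikit-learn version runs on synthetic data of matching shape (566 samples, 146 biomarkers) and is an illustrative equivalent only:

```python
# Sketch: t-test biomarker filtering, then logistic regression scored by AUC.
# Data are synthetic stand-ins for the ADNI blood biomarkers; the number of
# informative features and all effect sizes are invented for illustration.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic cohort: healthy (0) vs AD (1), 146 biomarkers, 566 participants
n_samples, n_features = 566, 146
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)
X[y == 1, :40] += 0.6   # make 40 biomarkers weakly informative (assumed)

# t-test filter: keep biomarkers that differ between the groups (p < 0.05)
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
selected = pvals < 0.05
X_sel = X[:, selected]

# Fit one of the classifiers named in the abstract and report AUC
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"{selected.sum()} biomarkers kept, test AUC = {auc:.3f}")
```

The AUC printed here is what the abstract reports as 0.991 and 0.709 for the two pairings; neural networks and decision trees would slot into the same pipeline in place of the logistic regression.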

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 309