786 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is therefore considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the derived slope and aspect models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). 
Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
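The core of the proposed comparison can be illustrated with a two-sample Kolmogorov-Smirnov test on elevation samples drawn from two rasters. The sketch below is illustrative only: synthetic normal elevations stand in for real co-located DEM cell values, and SciPy's `ks_2samp` plays the role of the single non-parametric test (the abstract's convolution-based multivariate extension is not reproduced here).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Synthetic stand-ins for cell values from a reference DEM and a product
# DEM whose elevations share the mean but differ in dispersion.
elev_ref = rng.normal(loc=850.0, scale=120.0, size=5000)   # "DEM02"
elev_prod = rng.normal(loc=850.0, scale=160.0, size=5000)  # "DEM05"

# Two-sample Kolmogorov-Smirnov test: compares whole empirical
# distribution functions rather than single parameters such as the
# mean or a root mean squared error.
stat, p_value = ks_2samp(elev_ref, elev_prod)

# Decision at the 5% significance level
similar = p_value > 0.05
```

Because the test is distribution-free, the same comparison applies unchanged to slope or aspect samples.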
Procedia PDF Downloads 115
785 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they neither integrate the different strategies in a coherent way nor take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (Modification of the Situation and Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study presents the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff’s alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. 
The results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess strategies for coping with stress and thus prevent mental health issues.
Keywords: acceptance, coping strategies, stress, validation process
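The internal-consistency index mentioned above, Cronbach's alpha, is a simple ratio of item variances to total-score variance. A minimal sketch on toy data (simulated Likert responses with a shared trait component, not the study's actual questionnaire data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 300 respondents answering 18 Likert-type items (1-5);
# inter-item correlation is induced by a shared latent "trait".
rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
scores = np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=(300, 18))), 1, 5)

alpha = cronbach_alpha(scores)
```

With strongly correlated items like these, alpha lands well above the conventional 0.7 threshold.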
Procedia PDF Downloads 339
784 The Regulation of Reputational Information in the Sharing Economy
Authors: Emre Bayamlıoğlu
Abstract:
This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with special emphasis on the sharing economy (i.e., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems that detect undesired comments and asks how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback. Software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as a writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. 
The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.
Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy
Procedia PDF Downloads 465
783 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to their status as a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and for mitigating the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. 
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
782 Impact of Land-Use and Climate Change on the Population Structure and Distribution Range of the Rare and Endangered Dracaena ombet and Dobera glabra in Northern Ethiopia
Authors: Emiru Birhane, Tesfay Gidey, Haftu Abrha, Abrha Brhan, Amanuel Zenebe, Girmay Gebresamuel, Florent Noulèkoun
Abstract:
Dracaena ombet and Dobera glabra are two of the most rare and endangered tree species in dryland areas. Unfortunately, their sustainability is being compromised by different anthropogenic and natural factors. However, the impacts of ongoing land use and climate change on the population structure and distribution of the species are less explored. This study was carried out in the grazing lands and hillside areas of the Desa'a dry Afromontane forest, northern Ethiopia, to characterize the population structure of the species and predict the impact of climate change on their potential distributions. In each land-use type, abundance, diameter at breast height, and height of the trees were collected using 70 sampling plots distributed over seven transects spaced one km apart. The geographic coordinates of each individual tree were also recorded. The results showed that the species populations were characterized by low abundance and unstable population structure. The latter was evinced by a lack of seedlings and mature trees. The study also revealed that the total abundance and dendrometric traits of the trees were significantly different between the two land uses. The hillside areas had a denser abundance of bigger and taller trees than the grazing lands. Climate change predictions using the MaxEnt model highlighted that future temperature increases coupled with reduced precipitation would lead to significant reductions in the suitable habitats of the species in northern Ethiopia. The species' suitable habitats were predicted to decline by 48–83% for D. ombet and 35–87% for D. glabra. Hence, to sustain the species populations, different strategies should be adopted, namely the introduction of alternative livelihoods (e.g., gathering NTFP) to reduce the overexploitation of the species for subsistence income and the protection of the current habitats that will remain suitable in the future using community-based exclosures. 
Additionally, the preservation of the species' seeds in gene banks is crucial to ensure their long-term conservation.
Keywords: grazing lands, hillside areas, land-use change, MaxEnt, range limitation, rare and endangered tree species
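The reported 35-87% habitat reductions are percent changes between the suitable areas of current and projected distributions. A minimal sketch of that calculation, assuming thresholded (binary) suitability maps such as those a MaxEnt workflow would produce; the grids below are toy values, not the study's rasters:

```python
import numpy as np

def habitat_decline(current: np.ndarray, future: np.ndarray) -> float:
    """Percent reduction in suitable cells between two binary (0/1)
    suitability maps of the same extent."""
    cur = current.sum()
    return 100.0 * (cur - future.sum()) / cur

# Toy binary suitability rasters (1 = suitable habitat cell)
current = np.array([[1, 1, 1, 0],
                    [1, 1, 0, 0],
                    [1, 1, 1, 1]])
future = np.array([[1, 0, 0, 0],
                   [1, 0, 0, 0],
                   [1, 1, 0, 0]])

decline = habitat_decline(current, future)
```

In practice each cell would also carry an area weight (cell size varies with latitude in geographic projections).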
Procedia PDF Downloads 96
781 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when thorough homogenization is carried out before feeding the processing plants, an operation with highly variable quality and performance is expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw materials were then grouped by them, finally providing a reference with operational settings for each group. Associating the physical and chemical parameters of a unit operation through benchmarking, or even an optimal reference of metallurgical recovery and product quality, results in reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with Machine Learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on Decision Trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the algorithm. The results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. 
The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect significant savings in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and an optimized life of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
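The grouping step can be sketched with K-Means on standardized blend-composition features. This is a generic illustration, not the plant's actual pipeline: the feature names (e.g., oxide grades) and values are invented, and the mapping from cluster to benchmark settings is only indicated in comments.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical blend-composition features for ore piles (columns could
# be, e.g., Nb2O5, P2O5, Fe2O3, BaO grades); values are illustrative.
rng = np.random.default_rng(7)
piles = np.vstack([
    rng.normal([2.5, 8.0, 30.0, 1.5], 0.2, size=(40, 4)),   # ore type A
    rng.normal([1.2, 12.0, 45.0, 3.0], 0.2, size=(40, 4)),  # ore type B
])

# Standardize so no single grade dominates the distance metric,
# then group the piles into composition clusters.
X = StandardScaler().fit_transform(piles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each cluster would then be mapped to benchmark operating settings
# (reagent dosages, pH, etc.) derived from its best historical runs.
```

In a real application the number of clusters would be chosen with the process team (e.g., via silhouette scores) rather than fixed at two.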
Procedia PDF Downloads 143
780 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
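The central mechanism, recovering a distribution from its Fourier transform via the COS expansion, can be shown in one dimension. The sketch below recovers a standard normal density from its characteristic function; a standard normal is only a stand-in for the (model-dependent) portfolio-loss characteristic function, and the adjustments the abstract mentions are not reproduced.

```python
import numpy as np

def cos_density(phi, x, a, b, n_terms=128):
    """Recover a density on [a, b] from its characteristic function phi
    via the COS (Fourier-cosine) expansion:
    f(x) ~ sum_k' F_k cos(k*pi*(x-a)/(b-a)),
    F_k = (2/(b-a)) Re(phi(k*pi/(b-a)) * exp(-i*k*pi*a/(b-a))),
    where the k = 0 term is weighted by 1/2."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    F = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5
    return np.sum(F[:, None] * np.cos(np.outer(u, x - a)), axis=0)

# Standard normal characteristic function, phi(u) = exp(-u^2/2)
phi_normal = lambda u: np.exp(-0.5 * u ** 2)

x = np.array([0.0, 1.0])
f = cos_density(phi_normal, x, a=-10.0, b=10.0)
```

The exponential decay of the coefficients is what gives the method its fast error convergence; the CDF needed for Value-at-Risk follows by integrating the cosine series term by term.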
Procedia PDF Downloads 166
779 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented through the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for pressure fluctuations and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. 
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
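The study's model was built in R, but the forward variant of stepwise selection is easy to sketch generically. The code below is an illustration on synthetic data, not the study's algorithm or dataset: the candidate variable names (`U_e`, `delta`, `tau_w`, `q`) are placeholders, and the 5% RSS-improvement stopping rule is an assumed criterion standing in for the usual AIC/F-test rules.

```python
import numpy as np

def forward_stepwise(X, y, names):
    """Greedy forward selection: repeatedly add the candidate predictor
    that most reduces the residual sum of squares (RSS), stopping when
    the best candidate no longer improves RSS by at least 5%."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    best_rss = float(np.sum((y - y.mean()) ** 2))  # intercept-only fit
    while remaining:
        trials = []
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            trials.append((float(np.sum((y - A @ beta) ** 2)), j))
        rss, j = min(trials)
        if rss > 0.95 * best_rss:  # no candidate helps enough: stop
            break
        best_rss = rss
        remaining.remove(j)
        selected.append(j)
    return [names[j] for j in selected]

# Toy stand-in for the wind-tunnel data: four candidate flow variables,
# of which only two actually drive the response.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=200)
chosen = forward_stepwise(X, y, ["U_e", "delta", "tau_w", "q"])
```

The stopping rule is where the overfitting risk mentioned above lives: a threshold that is too lax admits spurious predictors.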
Procedia PDF Downloads 135
778 Transcriptomic Analysis for Differential Expression of Genes Involved in Secondary Metabolite Production in Narcissus Bulb and in vitro Callus
Authors: Aleya Ferdausi, Meriel Jones, Anthony Halls
Abstract:
The Amaryllidaceae genus Narcissus contains secondary metabolites, which are important sources of bioactive compounds such as pharmaceuticals indicating that their biological activity extends from the native plant to humans. Transcriptome analysis (RNA-seq) is an effective platform for the identification and functional characterization of candidate genes as well as to identify genes encoding uncharacterized enzymes. The biotechnological production of secondary metabolites in plant cell or organ cultures has become a tempting alternative to the extraction of whole plant material. The biochemical pathways for the production of secondary metabolites require primary metabolites to undergo a series of modifications catalyzed by enzymes such as cytochrome P450s, methyltransferases, glycosyltransferases, and acyltransferases. Differential gene expression analysis of Narcissus was obtained from two conditions, i.e. field and in vitro callus. Callus was obtained from modified MS (Murashige and Skoog) media supplemented with growth regulators and twin-scale explants from Narcissus cv. Carlton bulb. A total of 2153 differentially expressed transcripts were detected in Narcissus bulb and in vitro callus, and 78.95% of those were annotated. It showed the expression of genes involved in the biosynthesis of alkaloids were present in both conditions i.e. cytochrome P450s, O-methyltransferase (OMTs), NADP/NADPH dehydrogenases or reductases, SAM-synthetases or decarboxylases, 3-ketoacyl-CoA, acyl-CoA, cinnamoyl-CoA, cinnamate 4-hydroxylase, alcohol dehydrogenase, caffeic acid, N-methyltransferase, and NADPH-cytochrome P450s. However, cytochrome P450s and OMTs involved in the later stage of Amaryllidaceae alkaloids biosynthesis were mainly up-regulated in field samples. Whereas, the enzymes involved in initial biosynthetic pathways i.e. 
fructose bisphosphate aldolase, aminotransferases, dehydrogenases, hydroxymethylglutarate and glutamate synthase, leading to the biosynthesis of the precursors tyrosine, phenylalanine and tryptophan for secondary metabolites, were up-regulated in callus. The knowledge of the probable genes involved in secondary metabolism and their regulation in different tissues will provide insight into Narcissus plant biology related to alkaloid production.
Keywords: narcissus, callus, transcriptomics, secondary metabolites
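Up- and down-regulation calls of the kind described above ultimately rest on fold changes between conditions. A much-simplified toy sketch (real RNA-seq analyses use count models with replication and multiple-testing correction; the transcript names and expression values below are invented):

```python
import numpy as np

# Hypothetical normalized expression values (e.g., TPM) for a few
# transcripts in bulb (field) vs. in vitro callus samples.
transcripts = ["OMT_like", "CYP_like", "aldolase_like", "GS_like"]
bulb = np.array([120.0, 90.0, 15.0, 10.0])
callus = np.array([30.0, 20.0, 60.0, 45.0])

# log2 fold change of callus relative to bulb; a pseudocount of 1
# avoids division by (or log of) zero.
log2fc = np.log2((callus + 1.0) / (bulb + 1.0))

# A common simplified rule: |log2FC| >= 1 means at least a 2-fold change.
up_in_callus = [t for t, fc in zip(transcripts, log2fc) if fc >= 1.0]
up_in_bulb = [t for t, fc in zip(transcripts, log2fc) if fc <= -1.0]
```

This mirrors the abstract's pattern: alkaloid-pathway enzymes higher in the field bulb, precursor-pathway enzymes higher in callus.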
Procedia PDF Downloads 143
777 Strategies for Drought Adaptation and Mitigation via Wastewater Management
Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen
Abstract:
The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth’s resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there is a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land’s green cover. Deforestation fueled by the ‘economic elites’, or the global players, has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had been performing CO2 sequestration for millions of years. The forest biomes have been turned into monoculture farms to produce feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agricultural practices only provide feedstock for livestock and food processing industries with huge carbon and water footprints. These are two main factors that have a ‘cause and effect’ relationship in the context of climate change. In contrast to organic and sustainable farming, monoculture practices to produce food, fuel, and feedstock using chemicals deprive the soil of its fertility, abstract surface and ground waters beyond the limits of replenishment, emit greenhouse gases, and destroy biodiversity. There are numerous cases across the planet where, due to overuse, the levels of surface water reservoirs, such as Lake Mead in the southwestern USA, and of groundwater, such as in Punjab, India, have deeply shrunk. 
Unlike the rain-fed food production system on which the poor communities of the world rely, blue water (surface and ground water) dependent mono-cropping for industrial and processed food creates a water deficit which puts the burden on domestic users. Excessive abstraction of both surface and ground waters for high-water-demand feedstock (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both these factors have elevated global temperatures, which has caused cascading events such as soil water deficits, flash fires, and the unprecedented burning of woods, creating megafires on multiple continents, namely in the USA, South America, Europe, and Australia. Therefore, it is imperative to reduce the green and blue water footprints of the agricultural and industrial sectors through the recycling of black and gray waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.
Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae
Procedia PDF Downloads 100
776 Home Made Rice Beer Waste (Choak): A Low Cost Feed for Sustainable Poultry Production
Authors: Vinay Singh, Chandra Deo, Asit Chakrabarti, Lopamudra Sahoo, Mahak Singh, Rakesh Kumar, Dinesh Kumar, H. Bharati, Biswajit Das, V. K. Mishra
Abstract:
The most widely used feed resources in poultry feed, such as maize and soybean, are expensive as well as in short supply. Hence, there is a need to utilize non-conventional feed ingredients to cut down feed costs. As an alternative, brewery by-products like brewers’ dried grains are potential non-conventional feed resources. North-East India is inhabited by many tribes, and most of these tribes prepare their indigenous local brew, mostly using rice grains as the primary substrate. Choak, a homemade rice beer waste, is an excellent and cheap source of protein and other nutrients. Fresh homemade rice beer waste (rice brewer’s grain) was collected locally. The proximate analysis indicated 28.53% crude protein, 92.76% dry matter, 5.02% ether extract, 7.83% crude fibre, 2.85% total ash, 0.67% acid insoluble ash, 0.91% calcium, and 0.55% total phosphorus. A feeding trial with 5 treatments (incorporating rice beer waste at inclusion levels of 0, 10, 20, 30, and 40% by replacing maize and soybean in the basal diet) was conducted with 25 laying hens per treatment for 16 weeks under a completely randomized design in order to study the production performance, blood biochemical parameters, immunity, egg quality, and cost economics of laying hens. The results showed significant differences (P<0.01) in egg production, egg mass, FCR per dozen eggs, FCR per kg egg mass, and net FCR. However, there was no significant difference in body weight, feed intake, or egg weight. Total serum cholesterol decreased significantly (P<0.01) at 40% inclusion of rice beer waste. Additionally, the egg Haugh unit increased significantly (P<0.01) as the graded levels of rice beer waste increased. The inclusion of 20% rice brewers’ dried grains reduced feed cost per kg egg mass and per dozen eggs produced by Rs. 15.97 and Rs. 9.99, respectively. 
Choak (homemade rice beer waste) can thus be safely incorporated into the diet of laying hens at a 20% inclusion level for better production performance and cost-effectiveness.
Keywords: choak, rice beer waste, laying hen, production performance, cost economics
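The cost metrics reported above (FCR per kg egg mass, feed cost per dozen eggs) reduce to simple ratios of trial totals. A minimal sketch with purely illustrative numbers, not the trial's data:

```python
def fcr_per_kg_egg_mass(feed_kg: float, egg_mass_kg: float) -> float:
    """Feed conversion ratio: kg feed consumed per kg egg mass produced."""
    return feed_kg / egg_mass_kg

def feed_cost_per_dozen(feed_kg: float, cost_per_kg: float, eggs: int) -> float:
    """Feed cost attributable to each dozen eggs produced."""
    return (feed_kg * cost_per_kg) / (eggs / 12.0)

# Illustrative only: a treatment group consuming 500 kg of feed at
# Rs. 30/kg while producing 1,800 eggs averaging 58 g each.
fcr = fcr_per_kg_egg_mass(feed_kg=500.0, egg_mass_kg=1800 * 0.058)
cost = feed_cost_per_dozen(feed_kg=500.0, cost_per_kg=30.0, eggs=1800)
```

Comparing these per-treatment figures against the control diet yields the cost savings the abstract reports.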
Procedia PDF Downloads 59
775 A Review Investigating the Potential of Zooxanthellae to Be Genetically Engineered to Combat Coral Bleaching
Authors: Anuschka Curran, Sandra Barnard
Abstract:
Coral reefs are among the most diverse and productive ecosystems on the planet, but due to the impact of climate change, these ecosystems are dying off, primarily through coral bleaching. Coral bleaching can be described as the process by which zooxanthellae (algal endosymbionts) are expelled from the gastrodermal cavity of the respective coral host, causing increased coral whitening. The general consensus is that mass coral bleaching is due to the dysfunction of photosynthetic processes in the zooxanthellae as a result of the combined action of elevated temperature and light stress. The question then is, do zooxanthellae have the potential to play a key role in the future of coral reef restoration through genetic engineering? The aim of this study is, firstly, to review the different zooxanthellae taxa and their traits with respect to environmental stress and, secondly, to review the information available on the protective mechanisms present in zooxanthellae cells when experiencing temperature fluctuations, specifically concentrating on heat shock proteins and the antioxidant stress response of zooxanthellae. The eight clades (A-H) previously recognized were redefined into seven genera. Different zooxanthellae taxa exhibit different traits, such as their photosynthetic stress responses to light and temperature. Zooxanthellae have the ability to determine the amount and type of heat shock proteins (Hsps) present during a heat response. The zooxanthellae can regulate both the host’s respective Hsps and their own. Hsps generally found in genotype C3 zooxanthellae, such as Hsp70 and Hsp90, contribute to the thermal stress response of the respective coral host. Antioxidant activity, found both within exposed coral tissue and in the zooxanthellae cells, can prevent coral hosts from expelling their endosymbionts. 
The up-regulation of gene expression, which may mitigate thermal stress induction of any of the physiological aspects discussed, can ensure stable coral-zooxanthellae symbiosis and presents a viable alternative strategy for preserving reefs amidst climate change. In conclusion, despite the unusual molecular design of these organisms, genetic engineering is a useful tool for understanding and manipulating variables and systems within zooxanthellae and therefore offers a solution that could ensure stable coral-zooxanthellae symbiosis in the future.
Keywords: antioxidant enzymes, genetic engineering, heat-shock proteins, Symbiodinium
Procedia PDF Downloads 189
774 Utilization of Silk Waste as Fishmeal Replacement: Growth Performance of Cyprinus carpio Juveniles Fed with Bombyx mori Pupae
Authors: Goksen Capar, Levent Dogankaya
Abstract:
According to the circular economy model, resource productivity should be maximized and wastes should be reduced. Since earth's natural resources are continuously depleted, resource recovery has gained great interest in recent years. As part of our research study on the recovery and reuse of silk wastes, this paper focuses on the utilization of silkworm pupae as a fishmeal replacement, substituting for the original fishmeal raw material, namely the fish itself. This, in turn, would contribute to the sustainable management of wild fish resources. Silk fibre is secreted by the silkworm Bombyx mori in order to construct a 'room' for itself during its transformation from pupa to adult moth. When the cocoons are boiled in hot water, the silk fibre becomes loose and silk yarn is produced by combining thin silk fibres. The remaining wastes are 1) sericin protein, which is dissolved in the water, and 2) the remaining part of the cocoon, including the dead body of the B. mori pupa. In this study, an eight-week trial was carried out to determine the growth performance of common carp juveniles fed with waste silkworm pupae meal (SWPM) as a replacement for fishmeal (FM). Four isonitrogenous diets (40% CP) were prepared replacing 0%, 33%, 50%, and 100% of the dietary FM with non-defatted silkworm pupae meal as a dietary protein source for experiments in C. carpio. Triplicate groups comprising 20 fish (0.92±0.29 g) were fed twice a day with one of the four diets. Over the 8-week period, the diet containing 50% of its protein from SWPM gave significantly higher (p ≤ 0.05) growth rates than the other groups. Further increases in SWPM resulted in a decrease in growth performance, and significantly lower growth (p ≤ 0.05) was observed with the diet having 100% SWPM. The study demonstrates that it is practical to replace 50% of the FM protein with SWPM, with significantly better utilization of the diet, but higher SWPM levels are not recommended for juvenile carp.
Further experiments are underway to obtain more detailed results on the possible effects of this alternative diet on the growth performance of juvenile carp.
Keywords: Bombyx mori, Cyprinus carpio, fish meal, silk, waste pupae
Procedia PDF Downloads 158
773 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades, the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work we performed a theoretical investigation on model systems to study the possible routes of substitution reactions. Among the nucleophiles present in biological systems, our interest is centered on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a reaction route in two stages, the first consisting of the substitution of chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that hydroxyl groups from water react preferentially with the selenuranes and are subsequently replaced by thiol groups.
The computed energy values are -106.07 kcal/mol for double substitution by hydroxyl groups and 96.63 kcal/mol for thiol groups. Solvation and pH reduction promote this route, increasing the energy value for the reaction with hydroxyl groups to -50.76 kcal/mol and decreasing the energy value for thiols to 7.92 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups) and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogenes, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 302
772 Expectations of Unvaccinated Health Workers in Greece and the Question of Trust: A Qualitative Study of Vaccine Hesitancy
Authors: Sideri Katerina, Chanania Eleni
Abstract:
The reasons why people remain unvaccinated, especially health workers, are complex. In Greece, 2 percent of health workers (around 7,000) remain unvaccinated, despite the fact that for this group of people vaccination against COVID-19 is mandatory. In April 2022, the Greek health minister repeated that unvaccinated health care workers will remain suspended from their jobs 'for as long as the pandemic lasts,' explaining that the suspension of the workers in question was 'entirely their choice' and that health professionals who do not believe in vaccines 'do not believe in their own science.' Although policy circles around the world often link vaccine hesitancy to ignorance of science or misinformation, various recently published qualitative studies show that vaccine hesitancy is the result of a combination of factors, which include distrust towards elites, the system of innovation and government. In a similar spirit, some commentators warn that labeling hesitancy as "anti-science" is bad politics. In this paper, we worked within the tradition of STS, taking the view that people draw upon personal associations to enact and express civic concern with an issue: the enactment of public concern involves the articulation of threats to actors' way of life, personal values, relationships, lived experiences, broader societal values and institutional structures. To this effect, we have conducted 27 in-depth interviews with unvaccinated Greek health workers and are in the process of conducting 20 more. We have so far found that, rather than a question of believing in 'facts', vaccine hesitancy reflects deep distrust towards those charged with making decisions and towards pharmaceutical companies, and that emotions (rather than rational thinking) play a crucial role in the formation of attitudes and the making of decisions.
We need to dig deeper to understand the causes of distrust towards technical government and the ways in which public(s) conceive of and want to take part in the politics of innovation. We particularly address the question of the effectiveness of mandatory vaccination of health workers and whether such top-down regulatory measures further polarize society, and finally discuss alternative regulatory approaches and governance structures.
Keywords: vaccine hesitancy, innovation, trust in vaccines, sociology of vaccines, attitude drivers towards scientific information, governance
Procedia PDF Downloads 74
771 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a 'whole-tree' approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of the Biomass Power Plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. Phase II incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture.
The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
Keywords: bio-economy, investment risk, circular design, economic modelling
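The risk model behind these findings is proprietary, but the core of a Monte Carlo techno-economic comparison can be sketched in a few lines. All numbers below (capex, revenue, opex, uncertainty spreads) are hypothetical placeholders, not figures from the study:

```python
import random
import statistics

def npv(cash_flows, rate):
    """Net present value of yearly cash flows; first entry is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(capex, base_revenue, base_opex,
                 years=20, rate=0.08, trials=10_000, seed=42):
    """Monte Carlo NPV: revenue and opex are randomly perturbed per trial."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        revenue = base_revenue * rng.gauss(1.0, 0.15)  # ~15% revenue uncertainty
        opex = base_opex * rng.gauss(1.0, 0.10)        # ~10% cost uncertainty
        results.append(npv([-capex] + [revenue - opex] * years, rate))
    return results

def downside_risk(sims):
    """Fraction of trials ending with a negative NPV."""
    return sum(v < 0 for v in sims) / len(sims)

# Hypothetical comparison: stand-alone plant vs. co-hosted Bio-Hub where
# shared CO2/heat streams add revenue and lower operating costs.
standalone = simulate_npv(capex=50e6, base_revenue=12e6, base_opex=7e6)
cohosted = simulate_npv(capex=55e6, base_revenue=15e6, base_opex=6e6)
print(f"mean NPV (M): standalone {statistics.mean(standalone)/1e6:.1f}, "
      f"co-hosted {statistics.mean(cohosted)/1e6:.1f}")
print(f"P(NPV<0): standalone {downside_risk(standalone):.1%}, "
      f"co-hosted {downside_risk(cohosted):.1%}")
```

Under these assumed inputs, co-location shifts the NPV distribution upward and shrinks the probability of loss, which is the sense in which co-hosting 'derisks' the investment.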
Procedia PDF Downloads 101
770 Operational Characteristics of the Road Surface Improvement
Authors: Iuri Salukvadze
Abstract:
Construction plays an important role in the history of mankind; there is hardly a product in our lives in which the builder's work is not materialized, since creating it requires setting up factories, roads, bridges, etc. The function of the Republic of Georgia as part of the Europe-Asia connecting transport corridor has increased significantly. Within this transit function, a large part of the cargo traffic is carried by motor transport, hence improving motor road infrastructure is rather important and raises new, increased operational demands for existing as well as new motor roads. Construction of a durable road surface requires rather large investments, but it offers high transport-operational properties, such as higher speeds, lower fuel consumption, less tire wear, etc. If the traffic intensity is high, the investment is therefore recouped rapidly and income increases accordingly. If the traffic intensity is relatively small, it is recommended to use lightened road-carpet structures so that capital investments do not exceed the normative level. The road carpet is divided into two basic types: asphaltic concrete and cement concrete. Asphaltic concrete is the most advanced type of road carpet. It is arranged in two or three layers on a rigid foundation and compacted. Asphaltic concrete is an artificial building material in which a selected and measured stone skeleton and sand are interconnected by bitumen and a mineral powder admixture. A less strictly selected similar material is called a bitumen-mineral mixture. Asphaltic concrete is a non-rigid building material, highly durable under vertical loadings but less resistant to the impact of horizontal forces. Cement concrete is a monolithic and durable material that withstands horizontal loads well but is less resistant to vertical loads.
Cement concrete consists of strictly selected, measured stone material and sand, with cement as the binder. The cement concrete road carpet is made up of separate slabs of sizes from 3-5 up to 6-8 meters. The slabs are reinforced by a rather complex system. Between the slabs, joints are arranged to avoid additional stresses caused by temperature fluctuations along the length of the slabs. To ensure the joint behavior of the separate slabs, they are connected by metal rods. The rods accommodate changes in slab length and distribute vertical forces and bending moments between the slabs. The foundation layers must be extremely durable, which requires high-quality stone material, cement, and metal. The qualification work aims to improve traffic conditions on motor roads, prolong their service life and improve their operational characteristics. The work consists of three chapters, 80 pages, 5 tables and 5 figures. It presents general concepts as well as tests carried out by various companies using modern methods, together with their results. Chapter III presents our own tests related to this issue and specific examples of improving the operational characteristics.
Keywords: asphalt, cement, cylindrical sample of asphalt, building
Procedia PDF Downloads 223
769 Solar-Thermal-Electric Stirling Engine-Powered System for Residential Units
Authors: Florian Misoc, Cyril Okhio, Joshua Tolbert, Nick Carlin, Thomas Ramey
Abstract:
This project is focused on designing a Stirling engine for a solar-thermal-electrical system that can supply electric power to a single residential unit. Since Stirling engines are heat engines that can operate on any available heat source, they are notable for their ability to generate clean and reliable energy without emissions. Due to the need to find alternative energy sources, Stirling engines are making a comeback with recent technologies, which include thermal energy conservation during the heat transfer process. Recent reviews show mounting evidence and positive test results that Stirling engines are able to produce a constant energy supply in the range of 5 kW to 20 kW. Solar power is one of the many uses for Stirling engines. Using solar energy to operate Stirling engines is an idea considered by many researchers, due to the ease of adaptability of the Stirling engine. In this project, the Stirling engine developed was designed and tested to operate from a biomass source of energy, i.e., a wood-pellet stove, during low solar radiation, with good results. An engine efficiency of 20% was estimated and 18% was measured, making it suitable and appropriate for residential applications. The effort reported was aimed at exploring the parameters necessary to design, build and test a 'Solar Powered Stirling Engine (SPSE)' using water (H₂O) as the heat transfer medium, with nitrogen as the working gas, that can reach or exceed an efficiency of 20%. The main objectives of this work consisted of: converting a V-twin cylinder air compressor into an alpha-type Stirling engine; constructing a solar water heater, using an automotive radiator as the high-temperature reservoir for the Stirling engine; and building an array of fixed mirrors that concentrate the solar radiation on the automotive radiator/high-temperature reservoir. The low-temperature reservoir is the surrounding air at ambient temperature.
This work has determined that a low-cost system is sufficiently efficient and reliable. Off-the-shelf components have been used, and estimates of the ability of the engine's final design to meet the electricity needs of a small residence have been made.
Keywords: Stirling engine, solar-thermal, power inverter, alternator
Procedia PDF Downloads 278
768 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined via the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S₀) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L₀) did not appear to play a significant role. The goodness of the resulting regression model was evaluated.
The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
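The two goodness-of-fit measures reported here, R² and MARE, are straightforward to compute once model predictions are available. A minimal sketch follows; the crystal-size values are hypothetical illustrations, not data from the study:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def mare(observed, predicted):
    """Maximum absolute relative error, in percent."""
    return 100.0 * max(abs(o - p) / abs(o) for o, p in zip(observed, predicted))

# Hypothetical crystal sizes (mm): laboratory measurements vs. model output
obs = [0.52, 0.61, 0.75, 0.88, 1.02]
pred = [0.53, 0.60, 0.77, 0.86, 1.03]
print(f"R2 = {r_squared(obs, pred):.3f}, MARE = {mare(obs, pred):.1f}%")
```

An R² near 1 says the regression explains almost all of the variance in crystal size, while MARE bounds the worst-case relative prediction error, which is what matters for a soft sensor used in feedback control.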
Procedia PDF Downloads 208
767 Developing Social Responsibility Values in Nascent Entrepreneurs through Role-Play: An Explorative Study of University Students in the United Kingdom
Authors: David W. Taylor, Fernando Lourenço, Carolyn Branston, Paul Tucker
Abstract:
There is an increasing number of students at universities in the United Kingdom engaging in entrepreneurship role-play to explore business start-up as a career alternative to employment. These role-play activities have been shown to have a positive influence on students' entrepreneurial intentions. Universities also play a role in developing graduates' awareness of social responsibility. However, social responsibility is often missing from these entrepreneurship role-plays. It is important that these role-play activities include the development of values that support social responsibility, in line with those running hybrid, humane and sustainable enterprises, and do not simply focus on profit. The Young Enterprise (YE) Start-Up programme is an example of a role-play activity that is gaining in popularity amongst United Kingdom universities seeking ways to give students insight into a business start-up. A Post-92 University in the North-West of England has adapted the traditional YE directorship roles (e.g., Marketing Director, Sales Director) by including a Corporate Social Responsibility (CSR) Director in all of the team-based YE Start-Up businesses. The aim of introducing this directorship was to observe whether such a role would help create a more socially responsible value system within each company and in turn shape business decisions. This paper investigates role-play as a tool to help enterprise educators develop socially responsible attitudes and values in nascent entrepreneurs. A mixed qualitative methodology, including interviews, role-play, and reflection, has been used to help students develop positive value characteristics through the exploration of unethical and selfish behaviours.
The initial findings indicate that role-play helped CSR Directors learn and gain insights into the importance of corporate social responsibility, influenced the values and actions of their YE Start-Ups, and increased the likelihood that, if the participants were to launch a business post-graduation, the intent would be for the business to be socially responsible. These findings help inform educators on how to develop socially responsible nascent entrepreneurs within a traditionally profit-orientated business model.
Keywords: student entrepreneurship, young enterprise, social responsibility, role-play, values
Procedia PDF Downloads 151
766 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites
Authors: A. Atli, K. Candelier, J. Alteyrac
Abstract:
Environmental and sustainability concerns push industries to manufacture alternative materials having less environmental impact. Wood Plastic Composites (WPCs) produced by blending biopolymers and natural fillers not only permit tailoring the desired properties of materials but are also a solution that meets environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with different amounts of spruce sawdust used as filler. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal and structural properties of these WPCs were characterized by different analytical methods: tensile, flexural and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC) and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attacks were determined in relation to the wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amount of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization, and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal and water absorption resistance of the WPCs decreased with increasing wood content, but the materials remained in durability class 1 (durable) with respect to fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except for the WPC with the highest wood content (30 wt%), which was rated 2 (slight attack), still indicating long-term durability.
All the results showed the possibility of elaborating easily injectable composite materials with adjustable properties by combining BIOPLAST® GS 2189 and spruce sawdust. Such lightweight WPCs make it possible both to recycle wood industry byproducts and to produce a fully ecological material.
Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index (MFI), structural properties, thermal properties, wood-plastic composites (WPCs)
Procedia PDF Downloads 137
765 Biochar from Empty Fruit Bunches Generated in the Palm Oil Extraction and Its Nutrients Contribution in Cultivated Soils with Elaeis guineensis in Casanare, Colombia
Authors: Alvarado M. Lady G., Ortiz V. Yaylenne, Quintero B. Quelbis R.
Abstract:
The oil palm sector has seen significant growth in Colombia after the introduction of policies to stimulate the use of biofuels, which eventually contributes to the reduction of greenhouse gases (GHG) that deteriorate not only the environment but also people's health. However, the policy of using biofuels has been strongly questioned because of the impacts it can generate; an example is the increase of other, more harmful GHGs such as CH₄, which is linked to the amount of solid waste generated. The department of Casanare is estimated to be one of the major palm oil producers of the country, given that it has recently expanded its sown area, which implies an increase in waste generated, primarily in the industrial stage. For this reason, the following study evaluated the agronomic potential of the biochar obtained from empty fruit bunches and its nutritional contribution in soils cultivated with Elaeis guineensis in Casanare, Colombia. The biochar was obtained by slow pyrolysis of the bunches in a retort oven at an average temperature of 190 °C and a residence time of 8 hours. The final product was taken to the laboratory for physical and chemical analysis, along with a soil sample from an Elaeis guineensis plantation located in Tauramena, Casanare. With the results obtained, plus bibliographic reports of the nutrient demand of this crop, the potential nutritional contribution of the biochar was determined. The estimated crop requirements are 12.1 kg.ha⁻¹ of nitrogen, 59.3 kg.ha⁻¹ of potassium, -31.5 kg.ha⁻¹ of magnesium and 5.6 kg.ha⁻¹ of phosphorus, while the biochar contributes 143.1 kg.ha⁻¹, 1204.5 kg.ha⁻¹, 39.2 kg.ha⁻¹ and 71.6 kg.ha⁻¹, respectively. The incorporation of biochar into the soil would significantly improve the concentrations of N, P, K and Mg, nutrients considered important for palm oil yield, in line with the importance of nutrient recycling in sustainable agricultural production systems.
The biochar application improves the physical properties of soils, mainly moisture retention. It also regulates the availability of nutrients for plant absorption, with economic savings in the application of synthetic fertilizers and irrigation water. In addition, it becomes an alternative for managing agricultural waste, reducing the unintended greenhouse gas emissions caused by decomposition in the field and reducing the CO₂ content of the atmosphere.
Keywords: biochar, nutrient recycling, oil palm, pyrolysis
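As a simple arithmetic check, the per-nutrient figures quoted above can be compared directly. The sketch below uses the abstract's reported values; note that the magnesium demand is reported as negative, so no coverage ratio is computed for it:

```python
# Reported per-hectare figures (kg/ha) from the abstract
demand = {"N": 12.1, "K": 59.3, "Mg": -31.5, "P": 5.6}
biochar = {"N": 143.1, "K": 1204.5, "Mg": 39.2, "P": 71.6}

def coverage_ratios(demand, supply):
    """Biochar contribution divided by crop demand, for positive demands only."""
    return {n: round(supply[n] / d, 1) for n, d in demand.items() if d > 0}

cover = coverage_ratios(demand, biochar)
print(cover)  # a ratio > 1 means the biochar alone exceeds the crop's demand
```

On these figures the biochar supplies roughly 12 times the nitrogen demand, 20 times the potassium demand and 13 times the phosphorus demand, consistent with the abstract's claim that incorporation would significantly improve N, P and K concentrations.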
Procedia PDF Downloads 157
764 Factors Associated with Commencement of Non-Invasive Ventilation
Authors: Manoj Kumar Reddy Pulim, Lakshmi Muthukrishnan, Geetha Jayapathy, Radhika Raman
Abstract:
Introduction: In the past two decades, noninvasive positive pressure ventilation (NIPPV) has emerged as one of the most important advances in the management of both acute and chronic respiratory failure in children. In the acute setting, it is an alternative to intubation, with the goals of preserving normal physiologic functions, decreasing airway injury, and preventing respiratory tract infections. There is a need to determine the clinical profile and the parameters that point towards the need for NIV in the pediatric emergency setting. Objectives: i) To study the clinical profile of children who required non-invasive ventilation and invasive ventilation; ii) To study the clinical parameters common to children who required non-invasive ventilation. Methods: All children between one month and 18 years of age who were intubated in the pediatric emergency department, and those for whom the decision to commence Non-Invasive Ventilation was made in the Emergency Room, were included in the study. Children were transferred to the Paediatric Intensive Care Unit, started on Non-Invasive Ventilation as per our hospital policy, and followed up in the Paediatric Intensive Care Unit. The clinical profile of each child, including age, gender, diagnosis and indication for intubation, was documented, as were clinical parameters such as respiratory rate, heart rate, saturation and grunting. The parameters obtained were subjected to statistical analysis. Observations: Airway disease (bronchiolitis 25%, viral-induced wheeze 22%) was the most common diagnosis among the 32 children who required Non-Invasive Ventilation. Neuromuscular disorder was the most common diagnosis among the 27 children (78%) who were intubated. The 17 children commenced on Non-Invasive Ventilation who later needed invasive ventilation had neuromuscular disease. A high-frequency nasal cannula was used in 32 children, and mask ventilation in 17.
Clinical parameters common to the Non-Invasive Ventilation group were age < 1 year (17), tachycardia n = 7 (22%), tachypnea n = 23 (72%), severe respiratory distress n = 9 (28%), grunt n = 7 (22%), and SpO₂ of 80% to 90%, n = 16. Children in the Non-Invasive Ventilation + intubation group were > 3 years old (9), had tachycardia 7 (41%) and tachypnea 9 (53%), with a male predominance, n = 9. In the statistical comparison among the 3 groups, the 'p' value was significant for pH, saturation, and use of inotropes. Conclusion: Invasive ventilation can be avoided in the paediatric Emergency Department in children with airway disease by commencing Non-Invasive Ventilation early. Intubation in the pediatric emergency department has a higher association with neuromuscular disorders.
Keywords: clinical parameters, indications, non invasive ventilation, paediatric emergency room
Procedia PDF Downloads 336
763 The Use of Solar Energy for Cold Production
Authors: Nadia Allouache, Mohamed Belmedani
Abstract:
It is imperative today to further explore alternatives to fossil fuels by promoting renewable sources such as solar energy to produce cold. It is also important to carefully examine the current state of this technology as well as its future prospects in order to identify the best conditions to support its optimal development. Technologies linked to this alternative source fascinate their users because they seem almost magical in their ability to transform solar energy directly into cooling without resorting to polluting fuels such as those derived from hydrocarbons or other toxic substances. In addition, they not only allow significant savings in electricity, but can also help reduce the costs of electrical energy production when applied on a large scale. In this context, our study aims to analyze the performance of solar adsorption cooling systems by selecting the appropriate adsorbent/adsorbate pair. This paper presents a model describing the heat and mass transfer in the tubular finned adsorber of a solar adsorption refrigerating machine. The modelling of the solar reactor takes into account the heat and mass transfer phenomena. The reactor pressure is assumed to be uniform, and the reactive medium is characterized by an equivalent thermal conductivity and assumed to be at chemical and thermodynamic equilibrium. The numerical model is governed by heat, mass and sorption equilibrium equations. Under the action of solar radiation, the adsorbent-adsorbate mixture exhibits transient behavior. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analyzed and discussed. The results show that the performance of the system, which depends on the incident global irradiance over a whole day, is governed by the weather conditions. For the working pairs used, increasing the number of fins reduces heat losses to the environment and increases heat transfer inside the adsorber.
The system performance is sensitive to the evaporator and condenser temperatures. For data measured on clear days in May and July 2023 in Algeria and Tunisia, the performance of the cooling system is significantly higher in Algeria than in Tunisia.
Keywords: adsorption, adsorbent-adsorbate pair, finned reactor, numerical modeling, solar energy
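As an illustrative sketch only (the numbers below are hypothetical, not taken from the paper), the solar coefficient of performance commonly used to assess such adsorption chillers relates the cold produced at the evaporator to the solar energy received by the collector over the day:

```python
# Hedged sketch: solar COP of an adsorption cooling system,
# defined here as COP_s = Q_evap / E_solar. All inputs are illustrative.

def solar_cop(q_evap_kj, irradiance_w_m2, collector_area_m2, hours):
    """COP_s = Q_evap / E_solar, with the solar input integrated over the day.

    q_evap_kj        -- cold produced at the evaporator [kJ]
    irradiance_w_m2  -- mean global irradiance on the collector [W/m^2]
    collector_area_m2, hours -- collector area and sunshine duration
    """
    e_solar_kj = irradiance_w_m2 * collector_area_m2 * hours * 3600.0 / 1000.0
    return q_evap_kj / e_solar_kj

# Example: 2500 kJ of cold from a 1 m^2 collector at a mean 600 W/m^2 over 8 h.
print(round(solar_cop(2500.0, 600.0, 1.0, 8.0), 3))  # -> 0.145
```

A formulation like this makes the abstract's point concrete: for a fixed machine, the daily COP_s rises and falls with the incident global irradiance, which is why clear-day conditions in Algeria and Tunisia yield different performances.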
Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response
Authors: Mona Solvoll, Ragnhild Kr. Olsen
Abstract:
Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readerships, and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how the audience has received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers, and local advertisers. Basic methodologies: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework. Findings: The analysis indicates that paywalls aim more at reducing digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining "old" print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price if the defensive print focus drives away younger digital readers and hampers the newspapers' potential for recruiting new audiences, as some previous studies have indicated. Analysis of young readers' news habits indicates that attracting the younger audience to traditional local news providers is particularly challenging, and that they are more prone than the older audience to seek alternative news sources.
Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for existing customers, but it is far less suited to attracting new ones. The paywall is a short-sighted strategy that drives away younger readers and paves the road for substitute offerings, particularly Facebook.
Keywords: business model, newspapers, paywall, user payment
Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan
Authors: Mustafa Rezazada, Takuya Maruyama
Abstract:
Cars have been strongly integrated with human life since their introduction, and this interaction is most evident in the urban context. Shifting city residents from driving private vehicles to public transit has therefore been a major challenge. Carsharing, as an innovative, environmentally friendly transport alternative, has contributed significantly to this transition so far: it has helped reduce household car ownership, lowered demand for on-street parking, cut the number of kilometers traveled by car, and shapes the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to the cities of developing countries. This study was conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior and to examine the possibility of introducing a carsharing system. The study establishes a new survey method, called the Onboard Carsharing Survey (OCS), in which carpooling passengers are interviewed aboard the vehicle following the Onboard Transit Survey (OTS) guideline with a few refinements. The survey focuses on respondents' daily travel behavior and hypothetical stated choices among carsharing opportunities, and is followed by an aggregate analysis. The survey results indicate the following: nearly two-thirds of the respondents (62%) have been carpooling every day for 5 years or more; more than half of the respondents are not satisfied with current modes; and, among other attributes, traffic congestion, the environment, and insufficient public transport were ranked by survey participants as the most critical factors in daily transportation. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice game scenarios.
Overall, the findings in this research show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes, and respondents' stated interest should affect the future of carsharing in Kabul City positively. The modal choice in this study is limited to carpooling and carsharing; more choice sets, including bus, cycling, and walking, will have to be added for further evaluation.
Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning
Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is a public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly burden on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments available so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD: medical imaging can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis can also be used. Compared with these diagnostic tools, the blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees to validate biomarkers with SAS Enterprise Miner. The data generated from ADNI contained 146 blood biomarkers from 566 participants, including cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients suffering from Alzheimer's disease (AD). Participants' samples were separated into two groups, healthy vs. MCI and healthy vs. AD, which we used to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy vs. AD and healthy vs. MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUC values for the two groups are 0.991 and 0.709, respectively.
We want to stress that the simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study on the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
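The two core steps of the pipeline described above — filtering features by a two-sample t-test and scoring classifiers by AUC — can be sketched in plain Python. This is a hedged illustration under assumed toy data, not the authors' SAS implementation:

```python
import math

def welch_t(a, b):
    """Welch's t statistic between two samples (unequal variances), as used
    to screen biomarker features that differ between, e.g., healthy and AD groups."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a positive case is scored above a negative case."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: classifier scores for 3 AD cases vs. 3 healthy controls.
print(round(auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]), 3))  # -> 0.889
```

In practice the t statistic would be converted to a p-value and thresholded to retain the discriminative subset of the 146 biomarkers before model fitting.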
Effects of Mild Heat Treatment on the Physical and Microbial Quality of Salak Apricot Cultivar
Authors: Bengi Hakguder Taze, Sevcan Unluturk
Abstract:
Şalak apricot (Prunus armeniaca L., cv. Şalak) is a specific variety grown in Igdir, Turkey. The fruit has distinctive properties that distinguish it from other cultivars, such as its unique size, color, taste, and higher water content. Drying is the most widely used method for the preservation of apricots. However, Şalak apricot is preferred for fresh consumption rather than drying due to its low dry matter content. The high water content and climacteric nature make the fruit prone to rapid quality loss during storage. Hence, alternative processing methods need to be introduced to extend the shelf life of the fresh produce. Mild heat (MH) treatment is of great interest as it can reduce the microbial load and inhibit enzymatic activity. Therefore, the aim of this study was to evaluate the impact of mild heat treatment on the natural microflora found on Şalak apricot surfaces and on some physical quality parameters of the fruit, such as color and firmness. For this purpose, apricot samples were treated at temperatures between 40 and 60 ℃ for periods ranging from 10 to 60 min in a temperature-controlled water bath. The natural flora on the fruit surfaces was examined using the standard plating technique both before and after the treatment, and any changes in the color and firmness of the fruit samples were also monitored. The control samples initially contained a total aerobic plate count (TAPC) of 7.5 ± 0.32 log CFU/g, a yeast and mold count (YMC) of 5.8 ± 0.31 log CFU/g, and 5.17 ± 0.22 log CFU/g of coliforms. The highest log reductions in TAPC and YMC were 3.87-log and 5.8-log after treatments at 60 ℃ and 50 ℃, respectively. Nevertheless, the fruit lost its characteristic aroma at temperatures above 50 ℃. Furthermore, large color changes (ΔE ˃ 6) were observed and the firmness of the apricot samples was reduced under these conditions.
On the other hand, MH treatment at 41 ℃ for 10 min resulted in 1.6-log and 0.91-log reductions in TAPC and YMC, respectively, with only slightly noticeable changes in color (ΔE ˂ 3). In conclusion, the application of temperatures above 50 ℃ caused undesirable changes in the physical quality of Şalak apricots. Although higher microbial reductions were achieved at those temperatures, temperatures between 40 and 50 ℃ should be investigated further with respect to the fruit quality parameters. Another strategy may be the use of high temperatures for short periods not exceeding 1-5 min. Finally, MH treatment combined with UV-C light irradiation can also be considered as a hurdle strategy for better inactivation results.
Keywords: color, firmness, mild heat, natural flora, physical quality, şalak apricot
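The two quantities reported throughout this abstract have standard definitions that can be sketched briefly. The log reduction compares surviving to initial counts on a log10 scale, and ΔE is the Euclidean (CIE76) distance between two CIELAB colors; the numeric inputs below are illustrative, not measurements from the study:

```python
import math

def log_reduction(n0, n):
    """Microbial log10 reduction from initial count n0 to surviving count n (CFU/g)."""
    return math.log10(n0 / n)

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples:
    Euclidean distance in CIELAB space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Illustrative: a population dropping from 10^6 to 10^2 CFU/g is a 4-log reduction.
print(log_reduction(1e6, 1e2))        # -> 4.0
# Illustrative Lab coordinates before/after treatment.
print(delta_e((50.0, 0.0, 0.0), (53.0, 4.0, 0.0)))  # -> 5.0
```

On these scales, the reported ΔE ˃ 6 at high temperatures corresponds to a color shift readily visible to the eye, while ΔE ˂ 3 at 41 ℃ is near the threshold of perceptibility.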
Solid Polymer Electrolyte Membranes Based on Siloxane Matrix
Authors: Natia Jalagonia, Tinatin Kuchukhidze
Abstract:
Polymer electrolytes (PEs) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, a PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity; it also needs excellent chemical stability for longevity and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and occurs primarily in the amorphous regions of the electrolyte. Crystallinity restricts segmental motion of the backbone and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought among polymers with highly flexible backbones and largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high-energy-density solid-state batteries, gas sensors, and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room-temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivity. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, which is most often achieved by donors such as ether oxygen or imide groups on the main chain or on the side groups of the PE.
It is also well established that lithium-ion coordination takes place predominantly in the amorphous domain and that the segmental mobility of the polymer is an important factor in determining ionic mobility. Much attention has been devoted to PEO-based amorphous electrolytes obtained by synthesizing comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the presented work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene glycol monomethyl ether and vinyltriethoxysilane, at a 1:28:7 ratio of the initial compounds, have been studied in the presence of Karstedt's catalyst, platinum hydrochloric acid (0.1 M solution in THF), and platinum-on-carbon catalyst in 50% solution in anhydrous toluene. The synthesized oligomers are vitreous liquid products, well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05-0.06. The synthesized oligomers were analyzed by FTIR and 1H, 13C, and 29Si NMR spectroscopy, and the resulting polysiloxanes were investigated by wide-angle X-ray diffraction, gel-permeation chromatography, and DSC analyses. Solid polymer electrolyte membranes have been obtained via sol-gel processing of the polymer systems doped with lithium trifluoromethylsulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conduction were calculated for all obtained compounds.
Keywords: synthesis, PMHS, membrane, electrolyte
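The activation energies mentioned above are typically extracted from the temperature dependence of conductivity. Under an assumed Arrhenius model, σ = σ₀·exp(−Ea/(R·T)), Ea follows from the slope of ln σ versus 1/T. A minimal sketch of that fit, using synthetic data rather than the paper's measurements:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_k, sigmas):
    """Least-squares slope of ln(sigma) vs 1/T; Ea = -slope * R, in J/mol.
    Assumes Arrhenius behaviour: sigma = sigma0 * exp(-Ea / (R*T))."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R

# Synthetic check: data generated with Ea = 20 kJ/mol should fit back to 20 kJ/mol.
temps = [300.0, 320.0, 340.0]
sigmas = [math.exp(-20000.0 / (R * t)) for t in temps]
print(round(activation_energy(temps, sigmas)))  # -> 20000
```

Amorphous electrolytes near Tg often deviate from Arrhenius behaviour and follow a VTF law instead, so this linear fit is only one of the models such conductivity data may require.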
Nitriding of Super-Ferritic Stainless Steel by Plasma Immersion Ion Implantation in Radio Frequency and Microwave Plasma System
Authors: H. Bhuyan, S. Mändl, M. Favre, M. Cisternas, A. Henriquez, E. Wyndham, M. Walczak, D. Manova
Abstract:
The 470 Li-24 Cr and 460 Li-21 Cr alloys belong to the next generation of super-ferritic, nickel-free stainless steel grades, containing titanium (Ti), niobium (Nb), and small percentages of carbon (C) and nitrogen (N). The addition of Ti and Nb generally improves corrosion resistance, while the low interstitial content of C and N ensures finer precipitates and greater ductility compared to conventional ferritic grades. These grades are considered an economic alternative to AISI 316L and 304 due to their comparable or superior corrosion resistance. Since 316L and 304 can be nitrided to improve mechanical surface properties such as hardness and wear resistance, it is hypothesized that the tribological properties of these super-ferritic stainless steel grades can also be improved by plasma nitriding. Thus, two sets of plasma immersion ion implantation experiments have been carried out: one with a high-pressure capacitively coupled radio frequency plasma at PUC Chile, and the other with a low-pressure microwave plasma at IOM Leipzig, in order to explore further improvements in the mechanical properties of 470 Li-24 Cr and 460 Li-21 Cr steel. Nitrided and unnitrided substrates were subsequently investigated using different surface characterization techniques, including secondary ion mass spectrometry, scanning electron microscopy, energy-dispersive X-ray analysis, Vickers hardness, wear resistance, and corrosion tests. In most of the characterizations, no major differences were observed between nitrided 470 Li-24 Cr and 460 Li-21 Cr. Due to the ion bombardment, an increase in surface roughness is observed at higher treatment temperatures, independent of the steel type. The formation of the chromium nitride compound takes place only at treatment temperatures of around 400-450 °C or above. However, the corrosion properties deteriorate after treatment at higher temperatures.
The physical characterization shows up to 25 at.% nitrogen in a diffusion zone of 4-6 μm and a 4-5-fold increase in hardness under the different experimental conditions. The samples implanted at temperatures above 400 °C showed a wear resistance around two orders of magnitude higher than that of the untreated substrates. The hardness is apparently affected by the different roughness of the samples and their different nitrogen profiles.
Keywords: ion implantation, plasma, RF and microwave plasma, stainless steel
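The Vickers hardness values behind the reported 4-5-fold increase are obtained from the indentation load and the mean indent diagonal via the standard relation HV = 1.8544·F/d². A brief sketch with illustrative numbers (not measurements from this study):

```python
def vickers_hv(load_kgf, diag_mm):
    """Standard Vickers hardness: HV = 1.8544 * F / d^2,
    with load F in kgf and mean indent diagonal d in mm (HV in kgf/mm^2)."""
    return 1.8544 * load_kgf / diag_mm ** 2

# Illustrative: a 0.3 kgf (HV0.3) indent with a 0.05 mm mean diagonal.
print(round(vickers_hv(0.3, 0.05), 1))  # -> 222.5
```

Because nitrided layers here are only a few micrometres thick, low loads (microhardness) are needed so the indent samples the layer rather than the softer substrate; surface roughness from ion bombardment adds scatter to the measured diagonals, consistent with the remark above.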