Search results for: extended Kalman filter
471 Collaborative Approaches in Achieving Sustainable Private-Public Transportation Services in Inner-City Areas: A Case of Durban Minibus Taxis
Authors: Lonna Mabandla, Godfrey Musvoto
Abstract:
Transportation is a catalytic feature in cities. Transport and land use are interdependent, with a feedback loop between how land is developed and how transportation systems are designed and used. This recursive relationship is reflected in how public transportation routes within the inner city enhance accessibility, creating spaces that are conducive to business activity, while business activity in turn informs public transportation routes. For this reason, this research focuses on public transportation within inner-city areas, where this dynamic is most evident. Durban is the chosen case study, where the dominant form of public transportation within the central business district (CBD) is the minibus taxi. The paradox here is that minibus taxis still form part of the informal economy even though they are the leading form of public transportation in South Africa. There have been many attempts to formalise this industry to follow more regulatory practices, but minibus taxis are privately owned, which complicates any proposed intervention. This study argues that the application of collaborative planning, through a sustainable partnership between the public and private sectors, will improve the social and environmental sustainability of public transportation. One of the major challenges in such collaborative endeavors is power dynamics; as a result, a key focus of the study is on power relations. Ideally, power relations should be observed over an extended period, specifically while the different stakeholders engage with each other, to yield valid data. However, such a lengthy observation process was not feasible within the data collection phase of this research.
Instead, interviews were conducted focusing on existing procedural planning practices between the inner-city minibus taxi association (South and North Beach Taxi Association), the eThekwini Transport Authority (ETA), and the eThekwini Town Planning Department. Conclusions and recommendations were then generated based on these data.
Keywords: collaborative planning, sustainability, public transport, minibus taxis
Procedia PDF Downloads 59
470 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of both railway operation and the timetabling that precedes it. Occupation conflicts are often solved based on defined train priorities on both process levels. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations; it is either compared to an acceptable level of service or evaluated economically with linear monetary functions. Without a well-founded objective function, however, dispatching decisions cannot be evaluated properly. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end customers. These models evaluate knock-on delays in more detail than linear monetary functions and take competing modes of transport into account. The new approach couples a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers switch to other modes of transport; the approach initially covers rail and road transport but can also be extended to air transport. The distribution of end customers across modes is described by the modal split. The reactions of the end customers affect the revenues of the railway undertakings. Different travel purposes involve different time reserves and tolerances towards delays. Besides revenue changes, longer journey times also cause additional costs, which depend on either time or distance and arise from the circulation of staff and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays.
The contribution margin is calculated for different resolutions of the same conflict, and the conflict resolution is improved until the monetary loss is minimised. The iterative process thus determines an optimal conflict resolution by observing the change in the contribution margin. Furthermore, a monetary value can be determined for each dispatching decision.
Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
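The iterative improvement loop described above can be sketched in a few lines. The delay values and per-minute contribution margins below are hypothetical, and the full choice-of-transport evaluation is reduced to a linear stand-in:

```python
def monetary_loss(knock_on_delays, margin_per_min):
    """Contribution-margin loss from knock-on delays; a linear stand-in for
    the full choice-of-transport evaluation."""
    return sum(d * m for d, m in zip(knock_on_delays, margin_per_min))

# Hypothetical resolutions of one occupation conflict between trains A and B:
# each resolution assigns different knock-on delays (minutes) to the trains.
resolutions = {
    "A waits": [6.0, 0.0],
    "B waits": [0.0, 4.0],
}
margin = [2.0, 5.0]   # contribution margin lost per delay minute (A, B)

# The iterative improvement degenerates here to picking the minimum-loss option.
best = min(resolutions, key=lambda r: monetary_loss(resolutions[r], margin))
```

With a richer, delay-dependent loss function the same loop selects the resolution with the smallest monetary loss among the candidates.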
Procedia PDF Downloads 328
469 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System
Authors: Nicholas Pearce, Eun-jin Kim
Abstract:
Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic of cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented that consistently accounts for various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.
Keywords: cardiovascular system, left atrium, numerical model, MEF
Procedia PDF Downloads 114
468 Trend Analysis of Rainfall: A Climate Change Paradigm
Authors: Shyamli Singh, Ishupinder Kaur, Vinod K. Sharma
Abstract:
Climate change refers to a change in climate over an extended period of time. The climate has varied throughout Earth's history, but anthropogenic activities are accelerating the rate of change, which is now a global issue. Increasing greenhouse gas emissions are causing global warming and related climate change issues at an alarming rate. Rising temperatures result in climate variability across the globe; changes in rainfall patterns, intensity, and extreme events are among the impacts of climate change. Rainfall variability refers to the degree to which rainfall patterns vary over a region (spatial) or through a time period (temporal). Temporal rainfall variability can be directly or indirectly linked to climate change, and such variability increases the vulnerability of communities to climate change. With increasing urbanization and unplanned developmental activities, air quality is also deteriorating. This paper focuses mainly on rainfall variability due to increasing levels of greenhouse gases. Rainfall data for 65 years (1951-2015) from the Safdarjung station in Delhi were collected from the Indian Meteorological Department and analyzed using the Mann-Kendall test for time-series data. The Mann-Kendall test is a statistical tool for detecting trends in a given data set; the magnitude of the trend can be measured with Sen's slope estimator. Data were analyzed monthly, seasonally, and yearly across the 65-year period. The monthly rainfall data for this period do not follow any increasing or decreasing trend. The monsoon season shows no increasing trend, but there was an increasing trend in the pre-monsoon season. Hence, the actual rainfall differs from the normal trend. From this analysis, it can be projected that pre-monsoon rainfall will increase relative to the monsoon season. Pre-monsoon rainfall causes a cooling effect and results in a drier monsoon season.
This will increase the vulnerability of communities to climate change and also affect related developmental activities.
Keywords: greenhouse gases, Mann-Kendall test, rainfall variability, Sen's slope
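The Mann-Kendall statistic and Sen's slope used in the analysis are simple to compute. A minimal sketch follows, on a synthetic series rather than the Safdarjung data, without the tie correction used in full implementations:

```python
import math
from statistics import median

def mann_kendall(x):
    """Mann-Kendall S statistic, normal-approximation Z, and Sen's slope
    (no tie correction; x is assumed evenly spaced in time)."""
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    # variance of S under the no-trend null hypothesis
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Sen's slope: median of all pairwise slopes
    slope = median((x[j] - x[i]) / (j - i)
                   for i in range(n - 1) for j in range(i + 1, n))
    return s, z, slope

# strictly increasing toy series: every pair is concordant, slope 2 per step
s, z, slope = mann_kendall([10, 12, 14, 16, 18, 20])
```

A |Z| above 1.96 indicates a trend significant at the 5% level, which is the decision rule behind the monthly, seasonal and yearly tests reported above.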
Procedia PDF Downloads 206
467 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management
Authors: Berk Ecer, Ebru Akcapinar Sezer
Abstract:
Traditional intersection management models, such as no-light intersections or signalized intersections, are not the most effective way of passing vehicles through an intersection if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In the AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated further. We extended their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lane if another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes guided by intuition; the basic intuition is to select a less crowded lane in order to reduce delay. We model that behavior without any change to the AIM workflow. Experimental results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. The advantage of handling lane management with a potential-based approach shows in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and proposes a solution for it. We observed that regulating the AIM inputs, namely the vehicles in each lane, made an effective contribution to AIM intersection management.
The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for realistic traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced average travel time by 0.2% to 17.3% and average intersection delay by 1.6% to 17.1% in the 4-lane and 6-lane scenarios.
Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach
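The lane-selection intuition (move to a visibly less crowded neighbouring lane) can be sketched as follows. The counts and threshold are illustrative and do not represent the actual PLO-AIM potential function:

```python
def choose_lane(current, counts, threshold=2):
    """Potential-style lane choice: switch to a neighbouring lane only if it
    holds at least `threshold` fewer vehicles (hysteresis avoids oscillation)."""
    neighbours = [i for i in (current - 1, current + 1) if 0 <= i < len(counts)]
    best = min(neighbours, key=lambda i: counts[i], default=current)
    if counts[current] - counts[best] >= threshold:
        return best
    return current

# middle lane is crowded (9 vehicles); lane 0 is the emptiest neighbour
lane = choose_lane(1, [3, 9, 4])
```

The threshold implements the "advantage" test described above: a vehicle only moves when the neighbouring lane is clearly better, which spreads vehicles across lanes without constant lane-hopping.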
Procedia PDF Downloads 139
466 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
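Flux prediction in such constraint-based models reduces to a linear program (flux balance analysis): maximise an objective flux subject to steady-state mass balance and flux bounds. A toy sketch with an invented three-reaction network, not the patient-specific reconstruction:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-metabolite, 3-reaction network (invented, not the patient model):
#   v1: -> A (uptake),  v2: A -> B,  v3: B -> (objective sink)
S = np.array([[1.0, -1.0,  0.0],    # metabolite A balance
              [0.0,  1.0, -1.0]])   # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

# Flux balance analysis: maximise v3 subject to steady state S @ v = 0
# (linprog minimises, so the objective coefficient on v3 is -1)
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
objective_flux = -res.fun            # limited here by the uptake bound
```

In the study's models the stoichiometric matrix covers mitochondrial metabolism and the predicted glycolytic and oxidative-phosphorylation fluxes play the role of `objective_flux`.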
Procedia PDF Downloads 294
465 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS for pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by making modifications to these early models or from physical principles. Overall, these models have had varying levels of accuracy: in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
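A forward stepwise selection of this kind can be sketched as follows. This is a generic greedy variant driven by residual sum of squares on synthetic data, not the R procedure or the wind-tunnel data used in the study:

```python
import numpy as np

def forward_stepwise(X, y, tol=0.01):
    """Greedy forward selection: repeatedly add the column that most reduces
    the residual sum of squares; stop when the relative gain drops below tol."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    rss = float(np.sum((y - y.mean()) ** 2))   # intercept-only model

    def rss_with(cols):
        A = np.column_stack([np.ones(n)] + [X[:, k] for k in cols])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return float(resid @ resid)

    while remaining:
        best = min(remaining, key=lambda j: rss_with(selected + [j]))
        new_rss = rss_with(selected + [best])
        if rss - new_rss < tol * rss:           # gain too small: stop
            break
        selected.append(best)
        remaining.remove(best)
        rss = new_rss
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 5 candidate predictors
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(scale=0.1, size=200)
cols = forward_stepwise(X, y)                   # expect columns 1 and 4 first
```

The stopping tolerance is the knob that trades feature-selection aggressiveness against the overfitting risk mentioned above; R's `step` uses an AIC criterion for the same purpose.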
Procedia PDF Downloads 135
464 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L. Pomel) in Tomato Crop
Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L, Tarantino E.
Abstract:
Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed on a wide range of cultivated plant species. The semiarid regions of the world are considered the main centre of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic weed control treatments, spanning physical, chemical, biological and agronomic methods and including the use of resistant plants, were carried out. In 2014, a trial was performed on processing tomato (cv Docet) grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 into a clay soil (USDA) fertilised with 100 kg ha-1 of N, 60 kg ha-1 of P2O5 and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the tomato growing cycle, at 70, 75, 81 and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded. Leaf chlorophyll meter (SPAD) values of the tomato plants were also measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. The results show lower SPAD colour index values in parasitized tomato plants compared with healthy ones. In addition, none of the treatments studied provided complete control of Phelipanche ramosa. However, the virulence of the attacks was mitigated by some treatments: the radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype.
It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the "seed bank" of the parasite in the soil.
Keywords: control methods, Phelipanche ramosa, tomato crop
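The ANOVA step can be sketched with SciPy. The SPAD readings below are invented for illustration and are not the trial data:

```python
from scipy import stats

# Invented SPAD readings for three of the thirteen treatments (3 replicates)
control = [38.1, 37.5, 39.0]
radicon = [41.2, 42.0, 40.8]
sulfur = [40.1, 39.8, 41.0]

# One-way ANOVA: p < 0.05 means at least one treatment mean differs, after
# which pairwise means would be compared with Tukey's test, as in the study.
f_stat, p_value = stats.f_oneway(control, radicon, sulfur)
```

In the randomized block design above, the block effect would also enter the model; `f_oneway` here illustrates only the treatment comparison.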
Procedia PDF Downloads 614
463 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is a public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly burden on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or stop the progress of the disease. Currently, there are several ways to diagnose AD, including medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analysis of the datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks, and decision trees to validate the biomarkers with SAS Enterprise Miner. The data, generated from ADNI, contained 146 blood biomarkers from 566 participants. Participants included cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD). Participants' samples were separated into two groups: healthy versus MCI and healthy versus AD. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before applying machine learning algorithms. We then built models with 4 machine learning methods; the best AUCs for the two groups were 0.991 and 0.709, respectively.
We want to stress that a simple, less invasive, widely available blood (plasma) test may also allow early diagnosis of AD. In our opinion, these results provide evidence that blood-based biomarkers could be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will give physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
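The filter-then-classify idea (t-test feature screening followed by measuring separability) can be sketched on synthetic data. The group sizes, feature count, and mean shift below are invented, and a Mann-Whitney rescaling stands in for a fitted classifier's AUC:

```python
import numpy as np
from scipy import stats

def filter_features(group_a, group_b, alpha=0.05):
    """Keep feature columns whose group means differ (two-sample t-test)."""
    _, p = stats.ttest_ind(group_a, group_b, axis=0)
    return np.where(p < alpha)[0]

def auc(scores_pos, scores_neg):
    """AUC as the probability a positive case outscores a negative one
    (Mann-Whitney U statistic rescaled to [0, 1])."""
    u, _ = stats.mannwhitneyu(scores_pos, scores_neg, alternative="greater")
    return u / (len(scores_pos) * len(scores_neg))

rng = np.random.default_rng(1)
ad = rng.normal(loc=[1.0, 0.0, 0.0], size=(80, 3))   # feature 0 shifted in "AD"
healthy = rng.normal(loc=0.0, size=(80, 3))
kept = filter_features(ad, healthy)     # feature 0 should survive the filter
a = auc(ad[:, 0], healthy[:, 0])        # separability of the shifted feature
```

In the study, the surviving features feed logistic regression, neural network, and decision tree models, whose AUCs are then compared on held-out data.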
Procedia PDF Downloads 322
462 Abridging Pharmaceutical Analysis and Drug Discovery via LC-MS-TOF, NMR, in-silico Toxicity-Bioactivity Profiling for Therapeutic Purposing Zileuton Impurities: Need of Hour
Authors: Saurabh B. Ganorkar, Atul A. Shirkhedkar
Abstract:
Although protection against toxic impurities seems to be the primary requirement, impurities that prove non-toxic can be explored for any therapeutic potential to assist advanced drug discovery; the essential role of pharmaceutical analysis can thus be extended effectively to achieve this. The present study pursued these objectives by characterizing the major degradation products, as impurities, of Zileuton, which has been used to treat asthma for years. Forced degradation studies were performed to identify the potential degradation products using ultra-fast liquid chromatography (UFLC). Liquid chromatography-mass spectrometry (time of flight, LC-MS-TOF) and proton nuclear magnetic resonance studies were utilized effectively to characterize the drug along with five major oxidative and hydrolytic degradation products (DPs). The mass fragments were identified for Zileuton, and the degradation pathways were investigated. The characterized DPs were subjected to in-silico studies such as XP molecular docking to compare the gain or loss in binding affinity to the 5-lipoxygenase enzyme. One of the impurities was found to have a binding affinity higher than that of the drug itself, indicating its potential to be more bioactive as a better antiasthmatic. Close structural resemblance can potentiate or reduce bioactivity and/or toxicity, and the chance of biological activity at other sites cannot be denied; this was assessed to some extent by predicting the probability of activity with the Prediction of Activity Spectra for Substances (PASS). The impurities were found to be bioactive as antineoplastics, antiallergics, and inhibitors of complement factor D. Toxicological endpoints such as Ames mutagenicity, carcinogenicity, developmental toxicity, and skin irritancy were evaluated using Toxicity Prediction by Komputer Assisted Technology (TOPKAT).
Two of the impurities were found to be non-toxic compared with the original drug Zileuton. Just as drugs are effectively purposed and repurposed, so can their impurities be, as they may have greater binding affinity, less toxicity, and a better ability to be bioactive at other biological targets.
Keywords: UFLC, LC-MS-TOF, NMR, Zileuton, impurities, toxicity, bio-activity
Procedia PDF Downloads 194
461 Deconstruction of the Term 'Shaman' in the Metaphorical Pair 'Artist as a Shaman'
Authors: Ilona Ivova Anachkova
Abstract:
The analogy between the artist and the shaman, as practitioners who more readily recognize and explore spiritual matters and thus contribute to society in a unique way, has been implied in both Modernity and Postmodernity. The Romantic conception of the shaman as a great artist who helps common men see and understand messages of a higher consciousness has been employed throughout Modernity and is active even now. This paper deconstructs the term 'shaman' in the metaphorical analogy 'artist - shaman' that was developed more fully in Modernity in different artistic and scientific discourses. The shaman is a figure that to a certain extent adequately reflects late modern and postmodern holistic views of the world, views that aim at distancing themselves from traditional religious and overly rationalistic discourses. However, the term 'shaman' can well be substituted by other concepts, such as the priest. The concept 'shaman' is based on modern ethnographic and historical investigations; its later philosophical, psychological and artistic appropriations designate the role of the artist as a spiritual and cultural leader. However, the artist and the shaman are not fully interchangeable terms. The figure of the shaman in 'primitive' societies performed many social functions that are now delegated to different institutions and positions: the shaman incorporates the functions of a judge and a healer, is a link to divine entities, and is the creative, aspiring human being with a heightened sensitivity to the world in both its spiritual and material aspects. The metaphorical analogy between the shaman and the artist is built in many ways. Both are seen as healers of society, as having a propensity for connection to spiritual entities, or as being more inclined to creativity than others. 'Shaman', however, is a fashionable word for a spiritual person, used perhaps because of the anti-traditionalist religious views of the modern and postmodern eras.
The figure of the priest is associated with a too rational, theoretical and detached attitude towards spiritual matters, while the practices of the shaman and the artist are considered engaged with spirituality on a deeper existential level. The term 'shaman', however, has no priority over other words or figures that can explore and deploy spiritual aspects of reality. Having substituted the term 'shaman' in the pair 'artist as a shaman' with 'the priest' or literally 'anybody', we witness a destruction of spiritual hierarchies and come to the view that everybody is responsible for their own spiritual and creative evolution.
Keywords: artist as a shaman, creativity, extended theory of art, functions of art, priest as an artist
Procedia PDF Downloads 229
460 Cybernetic Model-Based Optimization of a Fed-Batch Process for High Cell Density Cultivation of E. Coli In Shake Flasks
Authors: Snehal D. Ganjave, Hardik Dodia, Avinash V. Sunder, Swati Madhu, Pramod P. Wangikar
Abstract:
Batch cultivation of recombinant bacteria in shake flasks results in low cell density due to nutrient depletion. Previous protocols for high cell density cultivation in shake flasks have relied mainly on controlled release mechanisms and extended cultivation protocols. In the present work, we report an optimized fed-batch process for high cell density cultivation of recombinant E. coli BL21(DE3) for protein production. A cybernetic model-based, multi-objective optimization strategy was implemented to obtain the operating variables that achieve maximum biomass with a minimized substrate feed rate. A syringe pump was used to feed a mixture of glycerol and yeast extract into the shake flask. Preliminary experiments were conducted with online monitoring of dissolved oxygen (DO) and offline measurements of biomass and glycerol to estimate the model parameters. Multi-objective optimization was performed to obtain the Pareto front surface. The selected optimized recipe was tested on a range of proteins that show different extents of soluble expression in E. coli. These included eYFP and LkADH, which are largely expressed in soluble fractions, CbFDH and GcanADH, which are partially soluble, and human PDGF, which forms inclusion bodies. The biomass concentrations achieved in 24 h were in the range of 19.9-21.5 g/L, while the model-predicted value was 19.44 g/L. The process was successfully reproduced in a standard laboratory shake flask without online monitoring of DO and pH. The optimized fed-batch process showed significant improvement in both the biomass and the protein production of the tested recombinant proteins compared to batch cultivation. The proposed process will have significant implications for the routine cultivation of E. coli for various applications.
Keywords: cybernetic model, E. coli, high cell density cultivation, multi-objective optimization
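A fed-batch mass balance of the kind optimized here can be sketched with simple Monod kinetics and Euler integration. All parameter values below are illustrative placeholders, not the fitted cybernetic-model values from the study:

```python
# Toy fed-batch mass balance with Monod kinetics, integrated by forward Euler.
# Parameter values are illustrative, not the fitted cybernetic-model values.
mu_max, Ks, Yxs = 0.6, 0.1, 0.5        # 1/h, g/L, g biomass per g substrate
X, S, V = 0.1, 5.0, 0.5                # biomass g/L, substrate g/L, volume L
F, Sf, dt = 0.01, 200.0, 0.01          # feed rate L/h, feed conc. g/L, step h

for _ in range(int(24 / dt)):          # 24 h cultivation
    mu = mu_max * S / (Ks + S)         # Monod specific growth rate
    dX = mu * X - (F / V) * X                    # growth minus dilution
    dS = (F / V) * (Sf - S) - (mu / Yxs) * X     # feed-in minus consumption
    X, S, V = X + dX * dt, S + dS * dt, V + F * dt
    S = max(S, 0.0)                    # Euler can overshoot; clamp at zero
```

In the actual optimization, feed rate profiles like `F` are the decision variables and the simulated biomass and substrate trajectories feed the multi-objective (maximum biomass, minimum feed) search.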
Procedia PDF Downloads 257
459 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product, or eliminate any existing product, in a product category based on requirements. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools in Nestlé. We explored recent deep neural networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
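One common way to combine two forecasters, as in the DeepAR-plus-XGBoost ensemble described above, is an error-weighted average. The validation errors and forecast vectors below are placeholders, not real model outputs, and the abstract does not specify the exact combination scheme used:

```python
import numpy as np

def inverse_error_weights(errors):
    """Weight each model inversely to its validation error, normalised to 1."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

# Placeholder validation MAPEs and forecasts standing in for the DeepAR and
# XGBoost model outputs (no real model calls are made here).
w = inverse_error_weights([0.10, 0.30])          # -> weights 0.75 and 0.25
deepar_fc = np.array([100.0, 110.0, 120.0])
xgb_fc = np.array([120.0, 130.0, 140.0])
ensemble_fc = w[0] * deepar_fc + w[1] * xgb_fc   # weighted ensemble forecast
```

The weighting rewards the model that performed better on held-out data while still letting the other model contribute its domain-specific signal.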
Procedia PDF Downloads 273
458 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy
Authors: Jemal Ebrahim Dessie, Lukács Zsolt
Abstract:
High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to their poor formability at room temperature, warm and hot forming have been advised. However, warm and hot forming methods need more steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous, since ordinary tools can be used. However, springback of the supersaturated sheets and their thinning are critical challenges that must be resolved when using this technique. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and times, using a specially designed and developed furnace, in order to optimize the W-temper heat treatment temperature. A U-shaped bending test was carried out at different time intervals between the W-temper heat treatment and the forming operation. Finite element analysis (FEA) of the U-bending was conducted in AutoForm in order to validate the experimental results. A uniaxial tensile and unload test was performed to determine the kinematic hardening behavior of the material, which was optimized in the finite element code using systematic process improvement (SPI). The simulation considered the effects of the friction coefficient and the blank holder force. Springback parameters were evaluated on the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was higher at longer time intervals between the W-temper heat treatment and the forming operation. The die radius was the most influential parameter for springback at the flange. However, springback on the sidewall showed an overall increasing tendency as the punch radius increased relative to the die radius. 
The springback angles on the flange and sidewall appear to be influenced more strongly by the coefficient of friction than by the blank holder force, and the effect increases with increasing blank holder force.
Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper
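For intuition about why the sheet springs back on unloading, a first-order textbook estimate can be computed by hand. The sketch below uses the classic elastic-perfectly-plastic relation for pure bending, 1/R_loaded - 1/R_unloaded = 3*sigma_y/(E*t); it is a rough analytical check, not the AutoForm FEA model used in the study, and the material values are assumed for illustration only:

```python
def springback_radius(r_loaded_mm, yield_mpa, youngs_mpa, thickness_mm):
    """First-order elastic springback for pure bending of an
    elastic-perfectly-plastic sheet:
        1/R_loaded - 1/R_unloaded = 3 * sigma_y / (E * t)
    A textbook estimate only, not the FEA model of the study."""
    delta_curvature = 3.0 * yield_mpa / (youngs_mpa * thickness_mm)
    return 1.0 / (1.0 / r_loaded_mm - delta_curvature)

# assumed illustrative values (not measured data): E = 70 GPa,
# sigma_y = 150 MPa, t = 1 mm, bend radius under load 50 mm
r_unloaded = springback_radius(50.0, 150.0, 70000.0, 1.0)
```

The unloaded radius comes out larger than the loaded one, i.e. the part opens up, which is the springback the study quantifies via FEA.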
Procedia PDF Downloads 100
457 Inverterless Grid Compatible Micro Turbine Generator
Authors: S. Ozeri, D. Shmilovitz
Abstract:
Micro-Turbine Generators (MTG) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e. the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and difference components. 
The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-rotation generator sets such as aircraft power units.
Keywords: gas turbine, inverter, power multiplier, distributed generation
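The sum-and-difference mechanism described above is ordinary amplitude mixing (cos a · cos b = ½[cos(a-b) + cos(a+b)]) and can be verified numerically. In the sketch below, the stator frequency and sampling parameters are arbitrary illustrative choices, not actual MTG ratings:

```python
import numpy as np

# Illustrative parameters (assumed, not actual MTG ratings): stator electrical
# frequency 1000 Hz, rotor excitation programmed 50 Hz below it.
fs, f_stator, f_line = 8000, 1000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
field = np.cos(2 * np.pi * f_stator * t)               # rotating-field term
rotor_i = np.cos(2 * np.pi * (f_stator - f_line) * t)  # programmed excitation
voltage = field * rotor_i   # product -> sum and difference components

spectrum = np.abs(np.fft.rfft(voltage))
freqs = np.fft.rfftfreq(len(voltage), 1.0 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]   # the two strongest spectral lines
# peaks contains the 50 Hz difference line and the 1950 Hz sum line;
# a low-pass filter would retain only the former
```

Only two spectral lines appear: the desired 50 Hz difference component and the sum component near twice the stator frequency, exactly as the abstract describes.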
Procedia PDF Downloads 238
456 Effect of Plant Growth Promoting Rhizobacteria on the Germination and Early Growth of Onion (Allium cepa)
Authors: Dragana R. Stamenov, Simonida S. Djuric, Timea Hajnal Jafari
Abstract:
Plant growth promoting rhizobacteria (PGPR) are a heterogeneous group of bacteria that can be found in the rhizosphere, at root surfaces and in association with roots, enhancing the growth of the plant directly and/or indirectly. Increased crop productivity associated with the presence of PGPR has been observed in a broad range of plant species, such as raspberry, chickpea, legumes, cucumber, eggplant, pea, pepper, radish, tobacco, tomato, lettuce, carrot, corn, cotton, millet, bean, cocoa, etc. However, until now there has not been much research on the influence of PGPR on the growth and yield of onion. Onion (Allium cepa L.), of the Liliaceae family, is a species of great economic importance, widely cultivated all over the world. The aim of this research was to examine the influence of the plant growth promoting bacteria Pseudomonas sp. Dragana, Pseudomonas sp. Kiš, Bacillus subtilis and Azotobacter sp. on the seed germination and early growth of onion (Allium cepa). PGPR Azotobacter sp., Bacillus subtilis, Pseudomonas sp. Dragana and Pseudomonas sp. Kiš, from the collection of the Faculty of Agriculture, Novi Sad, Serbia, were used as inoculants. The number of cells in 1 ml of the inoculum was 10⁸ CFU/ml. The control variant was not inoculated. The effect of PGPR on seed germination and hypocotyl length of Allium cepa was evaluated under controlled conditions, on filter paper in the dark at 22°C, while the effect on plant length and mass was evaluated under semi-controlled conditions, in 10 L vegetative pots. Seed treated with fungicide and untreated seed were used. After seven days, the percentage of germination was determined. After seven and fourteen days, hypocotyl length was measured. Fourteen days after germination, the length and mass of the plants were measured. Application of Pseudomonas sp. Dragana and Kiš and Bacillus subtilis had a negative effect on onion seed germination, while the use of Azotobacter sp. gave positive results. 
On average, the application of all investigated inoculants had a positive effect on the measured parameters of plant growth. Azotobacter sp. had the greatest effect on hypocotyl length and on the length and mass of the plant. On average, better results were achieved with untreated seeds compared with treated seeds. The results of this study show that PGPR can be used in the production of onion.
Keywords: germination, length, mass, microorganisms, onion
Procedia PDF Downloads 237
455 Flood Simulation and Forecasting for Sustainable Planning of Response in Municipalities
Authors: Mariana Damova, Stanko Stankov, Emil Stoyanov, Hristo Hristov, Hermand Pessek, Plamen Chernev
Abstract:
We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency and EUMETSAT, providing access to global earth observation, meteorological and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-bound disruptive solutions for improving the balance between the ever-increasing water-related disasters caused by climate change and minimizing their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers analyze and plan resource needs and forge human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we adopt an EO4AI method of our platform ISME-HYDRO, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data of the meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index and snow cover, to model flood events and their span. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, throws alerts, formulates queries, provides superior interactivity and drives communication with the users. It provides synchronized visualization of table views, graph views and interactive maps. It will be federated with the DestinE platform.
Keywords: flood simulation, AI, Earth observation, e-Infrastructure, flood forecasting, flood areas localization, response planning, resource estimation
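As a toy illustration of the forecast-then-alert flow (emphatically not the platform's neural-network pipeline), a one-step river-level rule driven by rainfall and soil moisture might look like the following; the linear form and every coefficient are made-up assumptions:

```python
def forecast_river_level(level_now_m, rainfall_mm, soil_moisture,
                         a=0.9, b=0.02, c=0.5):
    """One-step river-level forecast from the current level, forecast
    rainfall and soil-moisture saturation (0-1). The linear form and all
    coefficients are made-up illustration; ISME-HYDRO applies a pipeline
    of neural networks to in-situ and satellite data."""
    return a * level_now_m + b * rainfall_mm * (0.5 + soil_moisture) + c

def flood_alert(forecast_level_m, alert_threshold_m):
    """Throw an alert when the forecast level reaches the threshold."""
    return forecast_level_m >= alert_threshold_m

wet = forecast_river_level(level_now_m=3.0, rainfall_mm=80.0, soil_moisture=0.9)
dry = forecast_river_level(level_now_m=3.0, rainfall_mm=0.0, soil_moisture=0.2)
```

Saturated soil amplifies the effect of the same rainfall, which is why soil moisture is among the inputs the abstract lists.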
Procedia PDF Downloads 21
454 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation
Authors: Khashayar Nasrifar
Abstract:
Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding state liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. Applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, nor are they classified among normal liquids. Also, ILs are often used under conditions far from equilibrium. Nevertheless, in calculating the properties of ILs, corresponding state correlations are useful when no experimental data are available. With the well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not that good: an average error of 4-5% should be expected. In this work, a data bank was compiled, and a simplified and concise corresponding state saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was then performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in our data bank with satisfactory predictions. 
The correlation was applied to IL densities at 0.1 MPa and was tested with an average uncertainty of around 2%. No adjustable parameter was used; only the critical temperature, critical volume, and acentric factor were required. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of development of such correlations for ILs, with their pros and cons.
Keywords: correlation, corresponding state principle, ionic liquid, density
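To illustrate the genre of correlation involved: the Soave alpha function below is the standard temperature-dependent term of the SRK equation of state that the authors modify, and the Rackett equation is a classic corresponding-state form for saturated liquid volume. Neither is the correlation developed in the paper; the pairing is shown only as a sketch of the approach, with water-like inputs as assumed example values:

```python
import math

def srk_alpha(tr, omega):
    """Soave alpha function of the SRK equation of state (the standard
    temperature-dependent term): m = 0.480 + 1.574*w - 0.176*w**2,
    alpha = (1 + m*(1 - sqrt(Tr)))**2."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    return (1.0 + m * (1.0 - math.sqrt(tr))) ** 2

def rackett_liquid_volume(tc_k, pc_pa, zra, t_k):
    """Rackett correlation for saturated liquid molar volume (m^3/mol),
    a classic corresponding-state density form shown only to illustrate
    the genre; it is NOT the correlation developed in the study."""
    R = 8.314462618  # J/(mol*K)
    tr = t_k / tc_k
    return (R * tc_k / pc_pa) * zra ** (1.0 + (1.0 - tr) ** (2.0 / 7.0))

alpha_cold = srk_alpha(0.7, 0.3)
v_298 = rackett_liquid_volume(647.1, 22.064e6, 0.2338, 298.15)  # water-like inputs
v_373 = rackett_liquid_volume(647.1, 22.064e6, 0.2338, 373.15)
```

Note that alpha equals exactly 1 at the critical point (Tr = 1) and grows as the liquid is cooled, which is the temperature dependence the authors exploit to reshape the reduced temperature.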
Procedia PDF Downloads 126
453 Effects of Partial Sleep Deprivation on Prefrontal Cognitive Functions in Adolescents
Authors: Nurcihan Kiris
Abstract:
Restricted sleep is common in young adults and adolescents, yet the few objective studies of the effect of sleep deprivation on cognitive performance have yielded unclear results. In particular, the effect of sleep deprivation on cognitive functions associated with the frontal lobe, such as attention, executive functions and working memory, is not well known. The aim of this study is to investigate experimentally the effect of partial sleep deprivation in adolescents on frontal lobe cognitive tasks covering working memory, strategic thinking, simple attention, continuous attention, executive functions, and cognitive flexibility. Subjects were recruited from voluntary students of Cukurova University. Eighteen adolescents underwent four consecutive nights of monitored sleep restriction (6-6.5 hr/night) and four nights of sleep extension (10-10.5 hr/night), in counterbalanced order and separated by a washout period. Following each sleep period, cognitive performance was assessed at a fixed morning time using a computerized neuropsychological battery based on frontal lobe function tasks, a timed test providing both accuracy and reaction time outcome measures. Of the cognitive tasks, only spatial working memory performance was found to be statistically lower in the restricted sleep condition than in the extended sleep condition. On the other hand, there was no significant difference in the performance of cognitive tasks evaluating simple attention, continuous attention, executive functions, and cognitive flexibility. It is thought that the spatial working memory and strategic thinking skills of adolescents in particular may be susceptible to sleep deprivation. Conversely, adolescents are predicted to be optimally successful under ideal sleep conditions, especially in circumstances requiring short-term storage of visual information, processing of stored information, and strategic thinking. 
The findings of this study may also point to possible negative functional effects of partial sleep deprivation on the processing of academic, social and emotional inputs in adolescents. Acknowledgment: This research was supported by the Cukurova University Scientific Research Projects Unit.
Keywords: attention, cognitive functions, sleep deprivation, working memory
Procedia PDF Downloads 154
452 In-Situ Sludge Minimization Using Integrated Moving Bed Biofilm Reactor for Industrial Wastewater Treatment
Authors: Vijay Sodhi, Charanjit Singh, Neelam Sodhi, Puneet P. S. Cheema, Reena Sharma, Mithilesh K. Jha
Abstract:
The management and secure disposal of the biosludge generated by widely commercialized conventional activated sludge (CAS) treatment has become a potential environmental issue. Thus, a sustainable technological upgrade of CAS for sludge yield minimization has recently gained serious attention from the scientific community. A number of recently reported studies effectively addressed remedial technological advancements, which were, however, largely limited to municipal wastewater. Moreover, a critical review of the literature identifies side-stream sludge minimization as a complex task to maintain. In this work, therefore, a hybrid moving bed biofilm reactor (MBBR) configuration (named the AMOMOX process) for in-situ minimization of the excess biosludge generated from high organic strength tannery wastewater is demonstrated. AMOMOX stands for anoxic MBBR (AM), aerobic MBBR (OM), and oxic CAS (OX). The AMOMOX configuration involved a combined arrangement of an anoxic MBBR and an oxic MBBR coupled with the aerobic CAS. The AMOMOX system was run in parallel with an identical CAS reactor. Both system configurations were fed with the same influent to judge real-time operational changes. For the AMOMOX process, strict maintenance of the operational strategies resulted in about 95% removal of NH4-N and SCOD from the tannery wastewater. Here, the nourishment of filamentous microbiota and the purposeful promotion of cell lysis effectively lowered the observed sludge yield (Yobs) to 0.51 kg VSS/kg COD. As a result, the volatile sludge scarcity apparent in the AMOMOX system achieved up to 47% reduction of the excess biosludge. These results were further corroborated by FE-SEM imaging and thermogravimetric analysis. 
However, the microbial strains inhabiting the system under the extended SRT (23-26 days) of the AMOMOX process remain a matter for further research.
Keywords: tannery wastewater, moving bed biofilm reactor, sludge yield, sludge minimization, solids retention time
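The yield figures quoted above follow directly from their definitions. In the sketch below, the CAS baseline yield of 0.96 kg VSS/kg COD is a hypothetical number chosen only to make the reduction arithmetic concrete; the abstract does not report the parallel CAS reactor's yield:

```python
def observed_sludge_yield(vss_produced_kg, cod_removed_kg):
    """Observed sludge yield: Yobs = kg VSS produced per kg COD removed."""
    return vss_produced_kg / cod_removed_kg

def sludge_reduction_percent(y_baseline, y_hybrid):
    """Relative reduction of sludge yield versus a baseline process."""
    return 100.0 * (y_baseline - y_hybrid) / y_baseline

# Yobs = 0.51 kg VSS/kg COD as reported; the CAS baseline of 0.96 is a
# hypothetical figure used only to make the arithmetic concrete.
y_obs = observed_sludge_yield(51.0, 100.0)
reduction = sludge_reduction_percent(0.96, y_obs)
```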
Procedia PDF Downloads 71
451 150 kVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter
Authors: Bartosz Kedra, Robert Malkowski
Abstract:
This paper provides a description and presentation of a laboratory test unit built around a 150 kVA power frequency converter and the Simulink Real-Time platform. Assumptions defining which load and generator types may be simulated using the device are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is provided, together with a list and description of all measurements. The potential for laboratory setup modifications is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used. The load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, providing the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping. With Simulink Real-Time, the Simulink models were extended with I/O card driver blocks that made it possible to automatically generate real-time applications and perform interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the performed laboratory tests are presented, and different load configurations are described. 
These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer
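Under-frequency load shedding, one of the simulated scenarios, is typically implemented as staged frequency thresholds that each trip a slice of load. The sketch below is a generic staged scheme with made-up thresholds and shed fractions; it is not the configuration of the described test unit, which is implemented in Simulink Real-Time rather than Python:

```python
def ufls_shed_fraction(frequency_hz,
                       stages=((49.0, 0.10), (48.7, 0.15), (48.4, 0.20))):
    """Cumulative fraction of system load shed by a staged under-frequency
    load-shedding scheme. Thresholds and shed fractions are made-up
    illustration, not the tested setup's configuration."""
    shed = 0.0
    for threshold_hz, fraction in stages:
        if frequency_hz <= threshold_hz:
            shed += fraction
    return shed
```

At nominal frequency nothing is shed; deeper frequency dips trip successive stages, so the shed fraction is monotone in the depth of the dip.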
Procedia PDF Downloads 323
450 Intervention Program for Emotional Management in Disruptive Situations Through Self-Compassion and Compassion
Authors: M. Bassas, J. Grané-Morcillo, J. Segura, J. M. Soldevila
Abstract:
Mental health prevention is key in a society where, according to the World Health Organization, suicide is the fourth leading cause of death worldwide. Compassion is closely linked to personal growth, which shows once again that prevention-based therapies remain an urgent social need. In this sense, a growing body of research demonstrates how cultivating a compassionate mind can help alleviate and prevent a variety of psychological problems. In the early 21st century, there has been a boom in third-generation compassion-based therapies, although there is a lack of empirical evidence for their efficacy. This study proposes a psychotherapy method (the 'Being Method') whose central axis revolves around emotional management through the cultivation of compassion. Therefore, the objective of this research was to analyze the effectiveness of this method with regard to the emotional changes experienced when we focus on what concerns us through the filter of compassion. The Being Method was born from the influence of Buddhist philosophy and contemporary psychology based mainly on Western rationalist currents. A quantitative cross-sectional study was carried out in a sample of women between 18 and 53 years old (n=47; Mage=36.02; SDage=11.86) interested in personal growth, in which the following six measuring instruments were administered: the Peace of Mind Scale (PoM), the Rosenberg Self-Esteem Scale (RSES), the Subjective Happiness Scale (SHS), two scales of the Compassionate Engagement and Action Scales (CEAS), the Coping Response Inventory for Adults (CRI-A) and the Cognitive-Behavioral Strategies Evaluation Scale (MOLDES). Following an experimental approach, participants were divided into an experimental and a control group. Longitudinal analysis was also carried out through a pre-post program comparison. 
The pre-post comparison indicated significant differences (p<.05) between before and after the therapy in the variables Peace of Mind, Self-esteem, Happiness, Self-compassion (A-B) and Compassion (A-B), as well as in several mental molds and several coping strategies. Between-group tests also showed significantly higher means in the experimental group. These outcomes highlight the effectiveness of the therapy, which improved all the analyzed dimensions. The social, clinical and research implications are discussed.
Keywords: being method, compassion, effectiveness, emotional management, intervention program, personal growth therapy
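The pre-post significance tests reported above are paired comparisons. A minimal paired-t sketch, using hypothetical scores rather than the study's data, looks like this:

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic and degrees of freedom for a pre-post
    comparison -- the kind of test behind reported p < .05 outcomes."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# hypothetical pre/post scores for five participants (not the study's data)
pre = [20.0, 22.0, 19.0, 24.0, 21.0]
post = [24.0, 25.0, 23.0, 27.0, 24.0]
t_stat, dof = paired_t(pre, post)
```

A t statistic beyond the critical value for the given degrees of freedom corresponds to the p < .05 outcomes the abstract reports.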
Procedia PDF Downloads 41
449 3D Interpenetrated Network Based on 1,3-Benzenedicarboxylate and 1,2-Bis(4-Pyridyl) Ethane
Authors: Laura Bravo-García, Gotzone Barandika, Begoña Bazán, M. Karmele Urtiaga, Luis M. Lezama, María I. Arriortua
Abstract:
Solid coordination networks (SCNs) are materials consisting of metal ions or clusters that are linked by polyfunctional organic ligands and can be designed to form three-dimensional frameworks. Their structural features, for example high surface areas, thermal stability and, in other cases, large cavities, have opened a wide range of applications in fields like drug delivery, host-guest chemistry, biomedical imaging, chemical sensing, heterogeneous catalysis and others related to greenhouse gas storage or even separation. In this sense, the use of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce extended structures with the characteristics needed for these applications. In this context, a novel compound, [Cu4(m-BDC)4(bpa)2DMF]•DMF, has been obtained by microwave synthesis, where m-BDC is 1,3-benzenedicarboxylate and bpa is 1,2-bis(4-pyridyl)ethane. The crystal structure can be described as a three-dimensional framework formed by two equal, interpenetrated networks. Each network consists of two different CuII dimers. Dimer 1 has two copper atoms with square pyramidal coordination, and dimer 2 has one copper with square pyramidal coordination and another with octahedral coordination; the latter dimer is unique in the literature, and the combination of both types of dimers is unprecedented. Thus, benzenedicarboxylate ligands form sinusoidal chains between dimers of the same type and also connect both chains, forming layers in the (100) plane. These layers are connected along the [100] direction through the bpa ligand, giving rise to a 3D network with voids of 10 Å² on average. However, the fact that there are two interpenetrated networks results in a significant reduction of the available volume. Structural analysis was carried out by means of single-crystal X-ray diffraction and IR spectroscopy. 
Thermal and magnetic properties have been measured by means of thermogravimetry (TG), X-ray thermodiffractometry (TDX), and electron paramagnetic resonance (EPR). Additionally, CO2 and CH4 high-pressure adsorption measurements have been carried out for this compound.
Keywords: gas adsorption, interpenetrated networks, magnetic measurements, solid coordination network (SCN), thermal stability
Procedia PDF Downloads 323
448 Effect of Mistranslating tRNA Alanine on Polyglutamine Aggregation
Authors: Sunidhi Syal, Rasangi Tennakoon, Patrick O'Donoghue
Abstract:
Polyglutamine (polyQ) diseases are a group of neurodegenerative diseases caused by expanded codon repeats in the DNA that translate into an elongated tract of the amino acid glutamine (Q) in the protein. The pathological explanation is that the polyQ tract forms cytotoxic aggregates in neurons, leading to their degeneration. There are no cures or preventative treatments established for these diseases as of today, although their symptoms can be relieved. This study specifically focuses on Huntington's disease, a polyQ disease in which aggregation is caused by extended repeats of the cytosine, adenine, guanine (CAG) codon in the huntingtin (HTT) gene, which encodes the huntingtin protein. Using this principle, we attempted to create six models, which included mutating the wildtype tRNA alanine variant tRNA-AGC-8-1 to carry the glutamine anticodons CUG and UUG, so that alanine is incorporated at glutamine sites in polyQ tracts. In the process, we were successful in obtaining tAla-8-1 CUG mutant clones in HTTexon1 plasmids with a polyQ tract of 23Q (non-pathogenic model) and 74Q (disease model). These plasmids were transfected into mouse neuroblastoma cells to characterize protein synthesis and aggregation in normal and mistranslating cells and to investigate the effects of replacing glutamines with alanines on the disease phenotype. Notably, we observed no noteworthy differences in mean fluorescence between the CUG mutants for 23Q or 74Q; however, the Triton X-100 assay revealed a significant reduction in insoluble 74Q aggregates. We were unable to create a tAla-8-1 UUG mutant clone, and determining the difference between the effects of the two glutamine anticodons may enrich our understanding of the disease phenotype. In conclusion, by generating structural disruption with the amino acid alanine, it may be possible to find ways to minimize the toxicity of Huntington's disease caused by these polyQ aggregates. 
Further research is needed to advance knowledge in this field by identifying the cellular and biochemical impact of specific tRNA variants found naturally in human genomes.
Keywords: Huntington's disease, polyQ, tRNA, anticodon, clone, overlap PCR
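The codon-anticodon logic behind the design can be checked in a few lines: an anticodon pairs antiparallel with its codon, so the engineered anticodons CUG and UUG read the two glutamine codons (CAG and CAA), while the native AGC anticodon of tRNA-AGC-8-1 reads an alanine codon. The sketch ignores wobble pairing for simplicity:

```python
# Watson-Crick complements for RNA bases
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codon_decoded_by(anticodon_5to3):
    """Codon (5'->3') read by a tRNA anticodon (5'->3'): the reverse
    complement, ignoring wobble pairing for simplicity."""
    return "".join(COMPLEMENT[base] for base in reversed(anticodon_5to3))
```

Because the tRNA still carries alanine (its charging identity is independent of the anticodon), each glutamine codon it reads receives alanine instead, which is the mistranslation the study exploits.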
Procedia PDF Downloads 41
447 The Material-Process Perspective: Design and Engineering
Authors: Lars Andersen
Abstract:
The development of design and engineering in large construction projects is characterized by an increasing flattening out of formal structures, extended use of parallel and integrated processes ('Integrated Concurrent Engineering') and an increased number of expert disciplines. The integration process is based on ongoing collaboration, dialogue, intercommunication and comments on each other's work (iterations). This process, based on reciprocal communication between actors and disciplines, triggers value creation. However, communication between equals is not in itself sufficient to create effective decision making. The complexity of the process and time pressure contribute to an increased risk of a decision deficit and loss of process control. The paper refers to a study that aims at developing a resilient decision-making system that does not conflict with communication processes based on equality between the disciplines. The study covers the construction of a hospital, following the phases of design, engineering and physical building. The research method is a combination of formative process research, process tracking and phenomenological analysis. The study traced challenges and problems in the building process back to the projection substrates (drawings and models) and further to the organization of the engineering and design phase. A comparative analysis of traditional and new ways of organizing the projecting made it possible to uncover an implicit material order, or structure, in the process. This uncovering led to the development of a material-process perspective. According to this perspective, the complexity of the process is rooted in material-functional differentiation. This differentiation presupposes a structuring material (the skeleton of the building) that coordinates the other types of material. Each expert discipline's competence is related to one or a set of materials. The architect, the consulting engineer for construction, etc. 
have their competencies related to the structuring material and, inherent in this, coordination competence. When dialogues between the disciplines concerning the coordination between them do not result in agreement, the disciplines responsible for the structuring material decide the interface issues. Based on these premises, this paper develops a self-organized, expert-driven interdisciplinary decision-making system.
Keywords: collaboration, complexity, design, engineering, materiality
Procedia PDF Downloads 221
446 Identifying the Factors that Influence Water-Use Efficiency in Agriculture: Case Study in a Spanish Semi-Arid Region
Authors: Laura Piedra-Muñoz, Ángeles Godoy-Durán, Emilio Galdeano-Gómez, Juan C. Pérez-Mesa
Abstract:
The current agricultural system in some arid and semi-arid areas is not sustainable in the long term. In southeast Spain, groundwater is the main water source and is overexploited, while alternatives like desalination are still limited. The Water Plan for the Mediterranean Basins 2015-2020 indicates a global deficit of 73.42 hm³ and an overexploitation of the aquifers of 205.58 hm³. In order to solve this serious problem, two major actions can be taken: increasing the available water and/or improving the efficiency of its use. This study focuses on the latter. Its main aim is to present the major factors related to water-use efficiency in farming. It focuses on Almería province, southeast Spain, one of the most arid areas of the country, and in particular on family farms as the main direct managers of water use in this zone. Many of these farms are among the most water-efficient in Spanish agriculture, but this efficiency is not generalized throughout the sector. This work conducts a comprehensive assessment of water performance in this area, using on-farm water-use, structural, socio-economic and environmental information. Two statistical techniques are used: descriptive analysis and cluster analysis. Thus, two groups are identified: the least and the most efficient farms regarding water usage. By analyzing both the common characteristics within each group and the differences between the groups with a one-way ANOVA analysis, several conclusions can be reached. The main differences between the two clusters center on the extent to which innovation and new technologies are used in irrigation. The most water-efficient farms are characterized by more educated farmers, a greater degree of innovation, new irrigation technology, specialized production and awareness of water issues and environmental sustainability. The research shows that better practices and policies can have a substantial impact on achieving a more sustainable and efficient use of water. 
The findings of this study can be extended to farms in similar arid and semi-arid areas and contribute to fostering appropriate policies to improve the efficiency of water usage in the agricultural sector.
Keywords: cluster analysis, family farms, Spain, water-use efficiency
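The between-cluster contrast described above reduces, for two groups, to a one-way ANOVA F statistic (equal to the square of the two-sample t statistic). The sketch below uses hypothetical water-productivity numbers, not the study's data:

```python
def one_way_anova_f(group_a, group_b):
    """One-way ANOVA F statistic for two groups (equal to the square of
    the two-sample t statistic), as used to contrast the farm clusters."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    grand = (sum(group_a) + sum(group_b)) / (na + nb)
    ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between, df_within = 1, na + nb - 2
    return (ss_between / df_between) / (ss_within / df_within)

# hypothetical water productivity (kg of produce per m^3 of water) for the
# two clusters -- illustrative numbers, not the study's data
efficient = [7.0, 7.5, 8.0, 7.5]
inefficient = [4.0, 4.5, 5.0, 4.5]
f_stat = one_way_anova_f(efficient, inefficient)
```

An F value above the critical value for the corresponding degrees of freedom indicates that the clusters differ significantly on that variable.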
Procedia PDF Downloads 288
445 Mn3O4 Anchored Broccoli-Flower-Like Nickel Manganese Selenide Composite for Ultra-Efficient Solid-State Hybrid Supercapacitors with Extended Durability
Authors: Siddhant Srivastav, Shilpa Singh, Sumanta Kumar Meher
Abstract:
Innovative materials for renewable energy storage and conversion are in strong demand in current electrochemical technology. In this context, choosing suitable organic precipitants for tuning the crystal characteristics and microstructures is a challenge. On the same note, herein we report a broccoli flower-like porous Mn3O4/NiSe2−MnSe2 composite synthesized by a simple two-step hydrothermal procedure, assisted by a sluggish precipitating agent and an effective capping agent, followed by intermediated anion exchange. The as-synthesized material was subjected to physical and chemical characterization, which revealed poly-crystallinity, strong bonding and a broccoli flower-like porous arrangement. The material was assessed electrochemically by cyclic voltammetry (CV), chronopotentiometry (CP) and electrochemical impedance spectroscopy (EIS). The electrochemical studies reveal redox behavior, a supercapacitive charge-discharge profile and extremely low charge-transfer resistance. Further, the fabricated Mn3O4/NiSe2−MnSe2 composite-based solid-state hybrid supercapacitor (Mn3O4/NiSe2−MnSe2 || N-rGO) delivers excellent rate-specific capacity and very low internal resistance, with the energy density (~34 W h kg–1) of a typical rechargeable battery and the power density (11995 W kg–1) of an ultra-supercapacitor. Consequently, it is a favorable contender for high-performance energy storage applications. The outstanding performance of the device is credited to the electrolyte-ion buffering, reservoir-like behavior of the broccoli flower-like Mn3O4/NiSe2−MnSe2, enhanced by the upgraded electronic and ionic conductivities of the N-doped rGO (negative electrode) and the PVA/KOH gel (electrolyte separator), respectively. Keywords: electrolyte-ion buffering reservoir, intermediated-anion exchange, solid-state hybrid supercapacitor, supercapacitive charge-discharge
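Energy and power densities like those quoted above are typically extracted from galvanostatic (CP) discharge curves. A minimal sketch of that arithmetic is given below; the idealised linear voltage decay, current and active mass are illustrative assumptions, not the paper's measured data (its ~34 W h kg–1 and 11995 W kg–1 come from its own measurements).

```python
import numpy as np

# Hypothetical galvanostatic discharge of a two-electrode device.
I = 1.0e-3    # discharge current, A (assumed)
m = 2.0e-6    # total active mass, kg (assumed: 2 mg)
t = np.linspace(0.0, 100.0, 1001)   # time, s
V = 1.6 * (1.0 - t / t[-1])         # idealised linear voltage decay, V

# Energy density E = I * integral(V dt) / m, converted from J/kg to W h/kg.
vdt = np.sum(0.5 * (V[1:] + V[:-1]) * np.diff(t))  # trapezoidal integral, V*s
E_wh_kg = I * vdt / m / 3600.0
# Average power density over the discharge, W/kg.
P_w_kg = E_wh_kg * 3600.0 / t[-1]
```

For these assumed numbers the sketch yields about 11.1 W h kg–1 and 400 W kg–1; the same formulas applied to the device's measured CP curves give the figures reported in the abstract.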
Procedia PDF Downloads 75
444 Biogas Potential of Deinking Sludge from Wastepaper Recycling Industry: Influence of Dewatering Degree and High Calcium Carbonate Content
Authors: Moses Kolade Ogun, Ina Korner
Abstract:
To improve sustainable resource management in the wastepaper recycling industry, studies into the valorization of the wastes it generates are necessary. The industry produces different residues, among which is deinking sludge (DS). DS is generated in the deinking process and constitutes a major fraction of the residues produced by the European pulp and paper industry. The traditional treatment of DS by incineration is capital-intensive due to the energy required for dewatering and the need for a complementary fuel source, given the low calorific value of DS. It could be replaced by a biotechnological approach. This study therefore investigated the biogas potential of DS streams with different dewatering degrees and the influence of the high calcium carbonate content of DS on its biogas potential. Dewatered DS (solid fraction) from a filter press and the filtrate (liquid fraction) were collected from a partner wastepaper recycling company in Germany. The solid and liquid fractions were mixed in proportion to obtain DS with different water contents (55–91% fresh mass). Spiked DS samples using deionized water, cellulose and calcium carbonate were prepared to simulate DS with varying calcium carbonate content (0–40% dry matter). Seeding sludge was collected from an existing biogas plant treating sewage sludge in Germany. Biogas potential was studied in a 1-liter batch test system under mesophilic conditions for 21 days. Specific biogas potentials in the range of 133–230 NL/kg organic dry matter were observed for the DS samples investigated. It was found that an increase in the liquid fraction leads to an increase in the specific biogas potential and a reduction in the absolute biogas potential (NL biogas/kg fresh mass). By comparing the absolute and specific biogas potential curves, an optimal dewatering degree corresponding to a water content of about 70% fresh mass was identified.
This degree of dewatering is a compromise among factors such as biogas yield, reactor size, energy required for dewatering and operating cost. No inhibitory influence on the biogas potential of DS was observed due to its reported high calcium carbonate content. This study confirms that DS is a potential bioresource for biogas production. Further optimization, such as nitrogen supplementation to offset the high C/N ratio of DS, can increase the biogas yield. Keywords: biogas, calcium carbonate, deinking sludge, dewatering, water content
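The trade-off between specific and absolute biogas potential described above can be sketched numerically. The linear interpolation of the reported range and the assumed organic share of dry matter are illustrative simplifications; the study's actual optimum (~70% fresh mass) also reflects measured curves, dewatering energy and cost.

```python
import numpy as np

# Assumed linear interpolation of the reported range: specific biogas potential
# rises from 133 NL/kg-oDM at 55% water content to 230 NL/kg-oDM at 91%.
wc = np.linspace(0.55, 0.91, 361)        # water content, fraction of fresh mass
specific = np.interp(wc, [0.55, 0.91], [133.0, 230.0])  # NL / kg organic dry matter
f_odm = 0.40                             # assumed organic share of dry matter
absolute = specific * (1.0 - wc) * f_odm # NL biogas / kg fresh mass

# Balanced compromise: maximise the smaller of the two normalised curves.
norm = lambda y: (y - y.min()) / (y.max() - y.min())
score = np.minimum(norm(specific), norm(absolute))
wc_opt = wc[score.argmax()]
```

Under these assumptions the specific potential rises with water content while the absolute potential falls, and the balanced optimum lands in the mid-70s percent range, illustrating why the study's compromise sits near 70% fresh mass once dewatering energy and cost are also weighed.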
Procedia PDF Downloads 182
443 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near-K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states; these states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, the double and triple autoionization rates are much smaller than the total autoionization rates. Future work can extend the method to study electron-impact triple ionization of atoms and ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA. Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method
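The initial-state preparation step described above (imaginary-time relaxation with Schmidt orthogonalization against lower states) can be illustrated in a toy one-dimensional setting. The harmonic-oscillator potential, grid parameters and explicit Euler propagator below are illustrative choices, not the paper's four-dimensional lattice implementation.

```python
import numpy as np

# 1-D harmonic oscillator (hbar = m = omega = 1): exact energies 0.5, 1.5, ...
n, L = 501, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2
dtau = 1e-3  # imaginary-time step

def relax(lower_states, n_steps=20000):
    """Relax a random trial state in imaginary time; Gram-Schmidt
    orthogonalise against already-converged lower states each step."""
    rng = np.random.default_rng(1)
    psi = rng.standard_normal(n)
    for _ in range(n_steps):
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
        psi = psi - dtau * (-0.5 * lap + V * psi)  # Euler step: psi -> psi - dtau*H*psi
        for phi in lower_states:                   # Schmidt orthogonalisation
            psi -= (phi @ psi) * dx * phi
        psi /= np.sqrt((psi @ psi) * dx)           # renormalise
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    E = (psi @ (-0.5 * lap + V * psi)) * dx        # energy expectation value
    return psi, E

psi0, E0 = relax([])      # ground state
psi1, E1 = relax([psi0])  # first excited state, kept orthogonal to psi0
```

Imaginary-time propagation damps every eigencomponent by exp(-E*tau), so the lowest state allowed by the orthogonality constraints survives; the paper applies the same idea with interior subshells on its four-dimensional radial lattice.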
Procedia PDF Downloads 153
442 Metallic and Semiconductor Thin Film and Nanoparticles for Novel Applications
Authors: Hanan Al Chaghouri, Mohammad Azad Malik, P. John Thomas, Paul O’Brien
Abstract:
The process of assembling metal nanoparticles at the interface of two liquids has received great interest over the past few years due to a wide range of important applications and the unusual properties of such particles compared to bulk materials. We present a simple, low-cost synthesis of metal nanoparticles, core/shell structures and semiconductors, followed by assembly of these particles between immiscible liquids. This talk is divided into three parts. The first describes the achievement of closed-loop recycling for producing cadmium sulphide as powders and/or nanostructured thin films for solar cells or other optoelectronic devices, using dithiocarbamato complexes of commercially available secondary amines with different chain lengths. The approach can be extended to other metal sulphides such as those of Zn, Pb, Cu or Fe, and to many transition metals and oxides. The second part concerns the synthesis of significantly cheaper magnetic particles suited to the mass market: Ni/NiO nanoparticles with ferromagnetic properties at room temperature, which at 5 nm were among the smallest and strongest magnets made in solution. This work could be applied to produce viable storage devices; another possibility is to disperse these nanocrystals in solution and use them to make ferrofluids, which have a number of mature applications. The third part covers the preparation and assembly of submicron silver, cobalt and nickel particles by polyol methods and at liquid/liquid interfaces, respectively. Noble metals such as gold, copper and silver are suitable for plasmonic thin-film solar cells because of their low resistivity and strong interaction with visible light. Silver is the best choice for solar cell applications since it has low absorption losses and high radiative efficiency compared to gold and copper.
Assembled cobalt and nickel films are promising for spintronic, magnetic, magneto-electronic and biomedical applications. Keywords: assembling nanoparticles, liquid/liquid interface, thin film, core/shell, solar cells, recording media
Procedia PDF Downloads 301