457 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts its routes from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We name these products product 1 and product 2. Each customer possesses items either of product 1 or of product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. It is assumed that the vehicle has two compartments. We name these compartments compartment 1 and compartment 2. It is assumed that compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1. These actions incur costs due to the extra labor required. The vehicle is allowed during its route to return to the depot to unload the items of both products.
The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure. Specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is less than or equal to eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
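A dynamic program of the kind described above can be sketched in miniature. The following is a hedged illustration, not the paper's algorithm: it collapses the model to a single product and a single compartment, uses invented costs and demand probabilities, and computes the minimum expected routing cost by backward recursion over (customer, load) states.

```python
import functools

# Hedged sketch (assumption: a simplified single-product, single-compartment
# variant, not the paper's two-compartment model): a vehicle serves customers
# 1..N in a fixed order, demands are i.i.d. with a known distribution, and
# before each leg it may detour via the depot to unload.  Distances are
# symmetric: consecutive customers are c_next apart, the depot is c_depot
# from every customer.  All numbers below are invented.
N, Q = 5, 4                                   # customers, vehicle capacity
demand_pmf = {0: 0.2, 1: 0.5, 2: 0.3}         # known demand distribution
c_next, c_depot = 1.0, 1.5                    # travel costs

@functools.lru_cache(maxsize=None)
def V(i, load):
    """Minimum expected cost-to-go, standing at customer i with `load` on board."""
    if i == N:
        return c_depot                        # final return to the depot
    best = float("inf")
    for via_depot in (False, True):
        travel = 2 * c_depot if via_depot else c_next
        start = 0 if via_depot else load
        expected = travel
        for d, p in demand_pmf.items():
            if start + d <= Q:                # collection fits on board
                expected += p * V(i + 1, start + d)
            else:                             # overflow forces a depot round trip
                expected += p * (2 * c_depot + V(i + 1, start + d - Q))
        best = min(best, expected)
    return best

optimal_expected_cost = V(0, 0)
```

For each customer index, comparing the two actions inside the loop recovers the critical load beyond which a depot detour becomes optimal, which is the kind of threshold structure the abstract refers to.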
Procedia PDF Downloads 257
456 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty
Abstract:
The Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density field and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high temporal resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides at the semi-diurnal frequency are more dominant in both observations and model simulations for November-December and March-April. In August, however, the estimate is found to be maximum at the near-inertial frequency at all the available depths. The observed vertical structure of the baroclinic velocities and their magnitude are found to be well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of semi-diurnal internal tides, as they represent 90-95% of the total variance for all the seasons.
The phase speed, group speed, and wavelength are found to be maximum for the post-monsoon season compared to the other two seasons. The model simulation suggests that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all the months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all the seasons and the results are analysed.
Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal
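As a rough illustration of one of the energy budget quantities mentioned above, depth-integrated available potential energy can be estimated from a density-anomaly profile as APE = ∫ g²ρ′² / (2ρ₀N²) dz. The sketch below uses entirely hypothetical profile values, not the model output:

```python
import math  # kept for extensions; the integral itself needs only arithmetic

# Hedged sketch (hypothetical profile, not MITgcm output): depth-integrated
# available potential energy from a density anomaly rho' and buoyancy
# frequency squared N^2, via APE = integral of g^2 * rho'^2 / (2*rho0*N^2) dz.
g, rho0 = 9.81, 1025.0                            # gravity (m/s^2), reference density (kg/m^3)
z = [0, 50, 100, 200, 400, 800]                   # depth levels (m), invented
rho_anom = [0.02, 0.15, 0.30, 0.20, 0.10, 0.05]   # density anomaly rho' (kg/m^3), invented
N2 = [1e-4, 8e-5, 5e-5, 2e-5, 1e-5, 5e-6]         # N^2 (s^-2), invented

# APE density at each level (J/m^3)
ape_density = [g**2 * r**2 / (2 * rho0 * n2) for r, n2 in zip(rho_anom, N2)]

# trapezoidal integration over depth gives APE per unit area (J/m^2)
ape = sum(0.5 * (ape_density[k] + ape_density[k + 1]) * (z[k + 1] - z[k])
          for k in range(len(z) - 1))
```

The trapezoidal rule stands in for whatever vertical discretization the model actually uses; with real model fields, the same one-liner integral would be applied at every horizontal grid point to obtain the spatial APE distribution.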
Procedia PDF Downloads 170
455 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque
Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya
Abstract:
The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads such as gravity and spring loads, using non-zero-free-length springs with child–parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages by minimization of the variance of the potential energy. It uses the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in the actuator torque requirement with a practical spring design and arrangement. The reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where a parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient and thereby reduce the energy consumption of the mechanism. Lower-end actuators are also lower in cost and facilitate the development of low-cost devices. Although the method provides only approximate balancing, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement.
The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resultant arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot
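The idea of flattening potential energy over the workspace can be sketched for the simplest possible case: a single gravity-loaded link with one spring. All parameters below are invented, and a coarse grid search stands in for the authors' optimizer:

```python
import math

# Hedged sketch (illustrative, not the authors' optimizer): approximate spring
# balancing of one gravity-loaded link by minimizing the variance of the total
# potential energy over the workspace.  The link pivots about a fixed joint;
# the spring runs from an anchor a metres above the pivot to a point b metres
# along the link.  All parameters are hypothetical.
m, g, r = 2.0, 9.81, 0.25          # link mass (kg), gravity, COM distance (m)
a, b = 0.10, 0.20                  # spring anchor height, attachment distance (m)
thetas = [math.radians(t) for t in range(0, 181, 5)]   # workspace sweep

def pe_total(k, L0, th):
    s = math.sqrt(a*a + b*b - 2*a*b*math.cos(th))      # current spring length
    return m*g*r*math.cos(th) + 0.5*k*(s - L0)**2      # gravity PE + spring PE

def pe_variance(k, L0):
    vals = [pe_total(k, L0, th) for th in thetas]
    mean = sum(vals) / len(vals)
    return sum((v - mean)**2 for v in vals) / len(vals)

# coarse grid search over spring constant k (N/m) and free length L0 (m)
var_balanced, k_opt, L0_opt = min(
    (pe_variance(k / 10, L0 / 100), k / 10, L0 / 100)
    for k in range(100, 5001, 50)       # k from 10 to 500 N/m
    for L0 in range(0, 16))             # L0 from 0 to 0.15 m
```

A useful sanity check on this toy: with zero free length and k·a·b = m·g·r, the cosine terms cancel exactly, so the grid search should drive the variance very close to zero; a non-zero free length makes the balance only approximate, which is precisely the trade-off the paper's method manages.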
Procedia PDF Downloads 245
454 Use of Cassava Waste and Its Energy Potential
Authors: I. Inuaeyen, L. Phil, O. Eni
Abstract:
Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels and their resources. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the most widely cultivated crops among farmers across Nigeria. As a result, there is an abundant supply of cassava waste in Nigeria. In this study, the aim is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol and furfural using a combination of biochemical, thermochemical and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key publications with useful parameters for developing the different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario using process economics, life cycle analysis, energy efficiency and social impact as the performance indexes.
Keywords: bio-refinery, cassava waste, energy, process modelling
Procedia PDF Downloads 376
453 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and then to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability).
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution and outliers have been checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low, albeit statistically significant. By contrast, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embedding changes in a firm's product and process operational practices has direct and positive consequences for its efficiency and sustainability.
Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
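Reliability assessment of the kind mentioned above is commonly summarized with Cronbach's alpha before fitting an SEM. A hedged sketch with invented 7-point Likert responses (not the study's data):

```python
# Hedged sketch: Cronbach's alpha for one multi-item construct, a standard
# reliability check preceding SEM.  The 7-point Likert responses below are
# invented for illustration only.
items = [  # rows = respondents, columns = questionnaire items of one construct
    [6, 5, 6, 7], [4, 4, 5, 4], [7, 7, 6, 7], [3, 2, 3, 3],
    [5, 6, 5, 5], [6, 6, 7, 6], [2, 3, 2, 2], [5, 5, 4, 5],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # sample variance

k = len(items[0])                                          # number of items
item_vars = [variance([row[j] for row in items]) for j in range(k)]
total_var = variance([sum(row) for row in items])          # variance of scale totals

# alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; the highly consistent toy responses above produce an alpha well over 0.9.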
Procedia PDF Downloads 136
452 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
This empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data used for the study were obtained from primary sources using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis and the Tobit regression model were used to analyze the data. The DEA result on the classification of the farmers into efficient and inefficient farmers showed that 17.67% of the sampled tuber crop farmers in the study area were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve on their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased in the study area, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmer acquired more farming experience. The allocative efficiency index of farmers that belonged to a cooperative society was 0.23, while their counterparts without cooperative society membership had an index value of 0.21. The results also showed that the allocative efficiency index was 0.43 for farmers with formal education, falling to 0.16 for farmers with non-formal education.
The efficiency level in the allocation of resources increased with more contact with extension services, as the allocative efficiency index increased from 0.16 to 0.31 with the frequency of extension contact increasing from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, reducing to an average of 15% as the farmer grows older. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place for farmers who did not attain the optimum frontier level, so that they can learn how to attain the remaining 74.39% level of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way to increase the efficiency level of the farmers in the study area.
Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
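In the special case of one input and one output, the DEA efficiency scores discussed above reduce to each farm's output/input ratio normalized by the best observed ratio. A sketch with invented farm data (the actual study uses multiple inputs and a linear-programming DEA model, which this deliberately simplifies):

```python
# Hedged sketch (not the study's computation): single-input, single-output
# CCR/DEA efficiency.  With one input and one output, the LP collapses to a
# ratio comparison: a farm is on the frontier iff its output/input ratio
# equals the best observed ratio.  Farm figures are invented.
farms = {          # farm: (input used, output produced), hypothetical units
    "A": (120.0, 900.0),
    "B": (100.0, 850.0),
    "C": (150.0, 900.0),
    "D": (80.0, 760.0),
}

ratios = {f: out / inp for f, (inp, out) in farms.items()}
best = max(ratios.values())
efficiency = {f: r / best for f, r in ratios.items()}   # 1.0 = on the frontier

frontier = [f for f, e in efficiency.items() if abs(e - 1.0) < 1e-12]
```

In the real multi-input setting, each farm instead gets its own linear program choosing the weights most favourable to it; the score's interpretation (1.0 on the frontier, below 1.0 otherwise) is the same as in this toy.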
Procedia PDF Downloads 290
451 Evaluation of Role of Surgery in Management of Pediatric Germ Cell Tumors According to Risk Adapted Therapy Protocols
Authors: Ahmed Abdallatif
Abstract:
Background: Patients with malignant germ cell tumors have a bimodal age distribution, with the first peak during infancy and the second after the onset of puberty. Gonadal germ cell tumors are the most common malignant ovarian tumor in females aged below twenty years. Sacrococcygeal and retroperitoneal abdominal tumors usually present at a large size before the onset of symptoms. Methods: Patients with pediatric germ cell tumors presenting to the Children's Cancer Hospital Egypt and the National Cancer Institute Egypt from January 2008 to June 2011 were included. Patients underwent stratification into low-, intermediate- and high-risk groups according to the Children's Oncology Group classification. Objectives: Assessment of the clinicopathologic features of all cases of pediatric germ cell tumors and classification of malignant cases according to their stage and primary site into low-, intermediate- and high-risk patients. Evaluation of surgical management in each group of patients, focusing on the surgical approach, the extent of surgical resection according to each site, the ability to achieve complete surgical resection, and perioperative complications. Finally, determination of the three-year overall and disease-free survival in the different groups and their relation to different prognostic factors, including the extent of surgical resection. Results: Of the 131 cases surgically explored, only 26 cases had re-exploration: 8 cases were explored for residual disease, 9 cases for remote recurrence or metastatic disease, and the other 9 cases for other complications. Patients with low risk were kept under follow-up after surgery; of the low-risk group (48 patients), only 8 patients (16.5%) shifted to intermediate risk. There were 20 patients (14.6%) diagnosed as intermediate risk who received 3 cycles of compressed chemotherapy (cisplatin, etoposide and bleomycin), and all 69 high-risk group patients (50.4%) received chemotherapy.
Stage of disease was strongly and significantly related to overall survival, with poorer survival in late stages (stage IV) compared to earlier stages. Conclusion: The three-year overall survival rate was 76.7% ± 5.4, the three-year EFS was 77.8% ± 4.0, and the three-year DFS was much better (89.8% ± 3.4) in the whole study group, with ovarian tumors having a significantly higher overall survival (90% ± 5.1). Event-free survival analysis showed that males were 3 times more likely to have adverse events than females. Patients who underwent incomplete resection were 4 times more likely than patients with complete resection to have adverse events. Disease-free survival analysis showed that patients who underwent incomplete surgery were 18.8 times more liable to recurrence than those who underwent complete surgery, and patients who were exposed to re-excision were 21 times more prone to recurrence than other patients.
Keywords: extragonadal, germ cell tumors, gonadal, pediatric
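Survival rates such as the reported three-year OS, EFS and DFS are typically obtained with the Kaplan-Meier product-limit estimator. A sketch on invented (time, event) pairs, not the study's data:

```python
# Hedged sketch: Kaplan-Meier product-limit survival estimate of the kind
# behind the reported 3-year OS/EFS/DFS figures.  The follow-up data below
# are invented: time in months, event=1 death/relapse, event=0 censored.
data = [(6, 1), (10, 0), (14, 1), (20, 0), (24, 1), (30, 0), (36, 0), (36, 1)]

def kaplan_meier(samples):
    """Survival curve as (event time, survival probability) pairs."""
    s, curve = 1.0, []
    for t in sorted({t for t, e in samples if e == 1}):       # distinct event times
        d = sum(1 for ti, ei in samples if ti == t and ei == 1)   # events at t
        n = sum(1 for ti, _ in samples if ti >= t)                # at risk just before t
        s *= 1 - d / n                                            # product-limit step
        curve.append((t, s))
    return curve

curve = kaplan_meier(data)
```

Censored patients (event=0) contribute to the at-risk counts until their last follow-up but never trigger a step down, which is what distinguishes this estimator from a naive fraction-surviving calculation.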
Procedia PDF Downloads 219
450 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, even make this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon's CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global-market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google.
Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered? Are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means used against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 144
449 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins
Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava
Abstract:
The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs) due to their flexible properties, in particular as non-viral gene delivery systems, their facile synthesis, robustness, non-oncogenic nature, and proven gene delivery efficiency. However, toxicity is still an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles, or nano-containers (NCs), loaded with different pharmaceuticals and biologicals, is still relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after the fulfillment of their function, are highly suitable for designing pharmaceutical NCs. Among them, the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. According to the first approach, NCs were prepared from the CPPs 8R3 (composed of R, sebacic acid and 1,3-propanediol) and 8R6 (composed of R, sebacic acid and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size with a zeta potential within +30 ÷ +35 mV at a concentration of 6 mg/mL. According to the second approach, the CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size with a zeta potential within +20 ÷ +28 mV, depending on the 8R6/8L6 ratio.
Stability studies of the fabricated NCs showed no substantial change in particle size or size distribution, and no formation of large particles, after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine) and Hepa 1-6 (murine), showed that both types of cationic NCs are biocompatible. The obtained data allow us to conclude that the obtained CNCs are promising for application as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications'.
Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles
Procedia PDF Downloads 156
448 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where a class is composed of a number of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods available for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class imbalance and within-class imbalance simultaneously eliminates the biases of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used, as it is one such classifier where the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of their classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
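The per-sub-cluster oversampling idea can be sketched in a simplified form (assumptions: the sub-clusters are given rather than found by model-based clustering, sub-cluster "complexity" is proxied by inverse cluster size, and the Lowner-John ellipsoid step is omitted):

```python
import random

# Hedged sketch (a simplified stand-in, not the authors' exact method):
# oversample a minority class per sub-cluster, giving smaller sub-clusters a
# proportionally larger share of the synthetic points so that no sub-cluster
# dominates the classifier's total error.  Each synthetic point is a linear
# interpolation between two points of the same sub-cluster.  Toy 2-D data.
random.seed(0)
sub_clusters = [
    [(0.10, 0.20), (0.20, 0.10)],                       # rare sub-concept
    [(2.0, 2.1), (2.2, 2.0), (1.9, 2.3), (2.1, 2.2)],   # denser sub-concept
]
majority_size = 18          # oversample the minority class up to this total

def oversample(sub_clusters, target_total):
    need = target_total - sum(len(c) for c in sub_clusters)
    weights = [1.0 / len(c) for c in sub_clusters]   # smaller cluster, larger share
    wsum = sum(weights)
    synthetic = []
    for cluster, w in zip(sub_clusters, weights):
        n = round(need * w / wsum)                   # per-cluster quota (exact here)
        for _ in range(n):
            a, b = random.sample(cluster, 2)         # two distinct parents
            t = random.random()                      # interpolation coefficient
            synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

new_points = oversample(sub_clusters, majority_size)
```

With these toy sizes (2 and 4), the inverse-size weighting sends two thirds of the 12 synthetic points to the rarer sub-concept, which is the "error domination" correction the abstract argues for; the full method additionally shapes the quotas by cluster complexity rather than size alone.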
Procedia PDF Downloads 418
447 Development of a Home-Hotel-Hospital-School Community-Based Palliative Care Model for Patients with Cancer in Suratthani, Thailand
Authors: Patcharaporn Sakulpong, Wiriya Phokhwang
Abstract:
Background: Banpunrug (Love Sharing House), established in 2013, provides community-based palliative care for patients with cancer from 7 provinces in southern Thailand. These patients come to receive outpatient chemotherapy and radiotherapy at Suratthani Cancer Hospital. They are poor and uneducated, and they need accommodation during their 30-45 day course of therapy. Methods: Community-based participatory action research (PAR) was employed to establish a model of palliative care for patients with cancer. The participants included health care providers, the community, and patients and their families. The PAR process included problem identification and needs assessment, community and team establishment, field survey, organization founding, planning of the model of care, action and inquiry (PDCA), outcome evaluation, and model dissemination. Results: The model of care at Banpunrug involves the concepts of the HHHS model: Banpunrug is a Home for patients; patients live as comfortably as in a Hotel; they are given care and living facilities similar to those in a Hospital; and the house is a School where patients learn how to take care of themselves, how to live well with cancer, and, most importantly, how to prepare themselves for a good death. The house is also a school of humanized care for health care providers. Banpunrug's philosophy of care is based on friendship therapy, social and spiritual support, community partnership, patient-family centeredness, a Live & Love sharing house, and holistic and humanized care. With this philosophy, the house is managed as a home for the patients and everyone involved; everything is free of charge for all eligible patients and their family members; all facilities and living expenses are donated by benevolent people, friends, and the community. Everyone, including patients and family, has a sense of belonging to the house, and there is no hierarchy between health care providers and patients.
The house is situated in a temple and a community and is supported by many local nonprofit organizations and healthcare facilities, such as a sub-district health promotion hospital and Suratthani Cancer Hospital. Village health volunteers and multi-professional health care volunteers have contributed not only appropriate care but also the knowledge and experience to develop a distinctive HHHS community-based palliative care model for patients with cancer. Since its opening, the house has been a home for more than 400 patients and 300 family members. It is also a model for many national and international healthcare organizations and providers, who come to visit and learn about palliative care in and by the community. Conclusions: The success of this palliative care model comes from community involvement, multi-professional volunteers and their contributions, and the concepts of the HHHS model. Banpunrug promotes consistent care across the cancer trajectory, independent of prognosis, in order to strengthen a full integration of palliative care.
Keywords: community-based palliative care, model, participatory action research, patients with cancer
Procedia PDF Downloads 269
446 The Church of San Paolo in Ferrara, Restoration and Accessibility
Authors: Benedetta Caglioti
Abstract:
The ecclesiastical complex of San Paolo in Ferrara is a monument of great historical, religious and architectural importance. Its long and complex history is evident from a mere reading of its plan and elevation: apparently unitary, but in reality marked by repeated modifications and additions, often of high quality. In terms of protection, restoration and enhancement, this demands due respect for how the ancient building was built and enriched over its centuries of life. Hence a rigorous methodological approach, tempered by the awareness that every monument, in order to live and receive the indispensable maintenance, must always be enjoyed and visited, and must therefore accommodate, in the right measure and compatibly with its nature, functional, distributive and technological improvements and adjustments related to the safety of people and things. The methodological approach substantiates the different elements of the project (such as functional distribution, safety, structural solidity, environmental comfort, the character of the site, building and urban planning regulations, financial and material resources, and the organization of the construction site) through the long-established guiding principles of restoration: 'minimum intervention,' 'recognisability' or 'distinguishability' of old and new, physico-chemical and figurative 'compatibility,' 'durability' and the, at least potential, 'reversibility' of what is done, leading to the definition of appropriate 'critical choices.'
The project tackles, together with the strictly functional issues, those of conservation and restoration of a static, structural and material-technology nature, with special attention to the precious architectural surfaces. In order to ensure the best architectural quality through conscious enhancement, the project involves a redistribution of the interior and service spaces, an accurate lighting system inside and outside the church, and a reorganization of the adjacent urban space. The reorganization of the interior is designed with particular attention to accessibility for people with disabilities. To help the community regain possession of the church's space already during the construction phase, the project proposal hypothesizes a permeability and flexibility in the management of the works that allows the rediscovered monument to gradually become more and more familiar to the citizens. Once the interventions have been completed, it is expected that the Church of San Paolo, second in importance only to the Cathedral, which is a few steps away, will be inserted into an existing circuit of use of the city which over the years has brought together culture, the environment and tourism to create greater awareness of what Ferrara can offer in cultural terms.
Keywords: conservation, accessibility, regeneration, urban space
Procedia PDF Downloads 110
445 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, the stabilization of domestic rice consumption, and the growing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, the country's major rice milling hubs. Concentration indices reveal that the industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. By actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers. The major problems faced by all rice millers are the lack of a uniform electricity supply and low-quality paddy. Many millers emphasized that the rice ceiling price constrains the production of quality rice. More than 80% of the millers in Polonnaruwa, the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. Large-scale millers in Polonnaruwa purchase paddy mainly through brokers, whereas in Hambanthota millers purchase mainly from paddy farmers directly. Millers in both districts sell rice mainly in Colombo and its suburbs. Huge variation can be observed in the amount of pledge loans (for paddy storage). There is a strong relationship among storage capacity, credit affordability, and the scale of operation of rice millers. Inter-annual price fluctuation ranged from 30% to 35%.
Analysis of market margins using a series of secondary data shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts; in Hambanthota a greater share goes to the farmer. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of paddy stored by the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized. Encouraging investment in a rice bran oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured to ensure the sustainability of the industry.
Keywords: structure, conduct, performance (SCP), rice millers
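The "levels of concentration" discussed above are typically measured with standard indices such as the k-firm concentration ratio (CRk) and the Herfindahl-Hirschman Index (HHI). The abstract does not state which index was computed, so the sketch below is a generic illustration with hypothetical milling-capacity shares, not the study's data; a low HHI/CR4 in Hambanthota would correspond to its more competitive structure.

```python
def cr_k(shares, k=4):
    """Concentration ratio: combined share of the k largest firms (percent)."""
    return sum(sorted(shares, reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman Index on percent shares (range 0-10000)."""
    return sum(s * s for s in shares)

# hypothetical capacity shares (percent) for the two districts
polonnaruwa = [20, 15, 12, 10, 8, 8, 7, 7, 7, 6]      # few larger mills
hambanthota = [6, 6] + [5] * 12 + [4] * 7              # many small mills
```

With these made-up shares, Polonnaruwa's CR4 is 57 and its HHI 1180, versus CR4 = 22 and HHI = 484 for Hambanthota, mirroring the weak-oligopsony versus competitive contrast reported in the study.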
Procedia PDF Downloads 330
444 Ex-vivo Bio-distribution Studies of a Potential Lung Perfusion Agent
Authors: Shabnam Sarwar, Franck Lacoeuille, Nadia Withofs, Roland Hustinx
Abstract:
After the development of a potential surrogate for MAA and its successful application to the diagnosis of pulmonary embolism in artificially embolized rat lungs, this microparticulate system was radiolabelled with gallium-68 to synthesize 68Ga-SBMP with high radiochemical purity (>99%). As a prerequisite for clinical trials, 68Ga-labelled starch-based microparticles (SBMP) were analysed for their in-vivo behavior in small animals. The presented work covers the ex-vivo biodistribution of 68Ga-SBMP: activity uptake in the target organs over time, excretion pathways of the radiopharmaceutical, %ID/g in major organs, target-to-non-target (T/NT) ratios, and the in-vivo stability of the radiotracer and, subsequently, of the microparticles in the target organs. Radiolabelling of the starch-based microparticles was performed by incubating them with 68Ga generator eluate (430±26 MBq) at room temperature and pressure, without any harsh reaction conditions. For the ex-vivo biodistribution studies, healthy White Wistar rats weighing 345-460 g were injected intravenously with 20±8 MBq of 68Ga-SBMP, containing about 200,000-600,000 SBMP particles in a volume of 700 µL. The rats were euthanized at predefined time intervals (5, 30, 60 and 120 min), and their organs were cut, washed, placed in pre-weighed tubes, and measured for radioactivity counts in an automatic gamma counter. The 68Ga-SBMP showed >99% radiochemical purity after just 10-20 min of incubation in a simple and robust procedure. The biodistribution showed that at 5 min post injection the major uptake was in the lungs, followed by blood, heart, liver, kidneys, bladder, urine, spleen, stomach, small intestine, colon, skin and skeleton, and thymus, with the smallest activity found in the brain. Radioactivity counts remained stable in the lungs, decreasing gradually over time; at 2 h post injection, almost half of the activity was still seen in the lungs.
This is sufficient time to perform PET/CT lung scanning in humans, while activity in the liver, spleen, gut and urinary system decreased with time. The results showed that the urinary system, rather than the hepatobiliary route, is the excretion pathway. The high T/NT ratios suggest well-contrasted images for PET/CT lung perfusion studies; hence, further pre-clinical studies and then clinical trials should be planned to exploit this potential lung perfusion agent.
Keywords: starch based microparticles, gallium-68, biodistribution, target organs, excretion pathways
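The two figures of merit quoted in the abstract, %ID/g and the T/NT ratio, follow directly from organ counts, organ mass and injected activity. A minimal sketch (decay correction omitted; all numerical values below are hypothetical, not the study's measurements):

```python
def percent_id_per_gram(organ_counts, organ_mass_g, injected_counts):
    """%ID/g: fraction of the injected dose found in the organ,
    expressed per gram of tissue (decay correction omitted)."""
    return 100.0 * organ_counts / injected_counts / organ_mass_g

def t_nt_ratio(target_idg, nontarget_idg):
    """Target-to-non-target ratio, e.g. lung uptake over background."""
    return target_idg / nontarget_idg
```

For example, a hypothetical 4.5e6 counts in 1.5 g of lung out of 1.0e7 injected counts gives 30 %ID/g; 5e5 counts in a 10 g liver gives 0.5 %ID/g, for a lung/liver T/NT of 60.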
Procedia PDF Downloads 177
443 Conserving Naubad Karez Cultural Landscape – a Multi-Criteria Approach to Urban Planning
Authors: Valliyil Govindankutty
Abstract:
Human civilizations across the globe stand testimony to water being one of the major points of interaction with nature. In drier areas these interactions revolve around water: harnessing, transporting, using and managing it. Many ingenious ideas were born, nurtured and developed for harvesting, transporting, storing and distributing water in the drier parts of the world. Many methods of water extraction, collection and management are found throughout the world, some associated with efficient, sustained use of surface water, groundwater and rainwater. The karez is one such ingenious method of collecting, transporting, storing and distributing groundwater. Most karez systems in India were developed during the reign of Muslim dynasties whose ruling classes descended from Persia, or had influential connections there and invited expert engineers from the region. Karez have strongly influenced village socio-economic organisation through the multitude of uses to which they were put. These masterpieces of engineering collect groundwater and direct it, through a subsurface gallery with a gradual slope, to surface canals that supply water to settlements and agricultural fields. This ingenious technology was the result of the need to harness groundwater in arid areas like Bidar. The study views this traditional technology in a historical perspective linked to the sustainable utilization and management of groundwater and, above all, the immediate environment. The karez system is one of the best available demonstrations of human ingenuity and adaptability in situations and locations of water scarcity. Bidar, capital of the erstwhile Bahmani sultanate, with a history of 700 years or more, is one of the heritage cities of present-day Karnataka State. The unique water systems of Bidar, along with other historic entities, have been listed on the World Heritage Watch List by the World Monuments Fund.
The historical and cultural landscape of Bidar is closely tied to the natural resources of the region, the karez systems being among the best examples. The karez systems were the lifeline of historical Bidar, providing potable water and fulfilling domestic and irrigation needs both within and outside the fort enclosures. These systems are still functional, but they are under great pressure and threat from rapid, unplanned urbanisation. Changes in land use and the fragmentation of land are already paving the way for irreversible modification of the karez cultural and geographic landscape. The paper discusses the significance of the character-defining elements of the Naubad karez landscape, highlights the importance of conserving cultural heritage, and presents a geographical approach to its revival.
Keywords: karez, groundwater, traditional water harvesting, cultural heritage landscape, urban planning
Procedia PDF Downloads 494
442 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can therefore create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, confirming the validity of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient: both the magnitude and the distribution heterogeneity of the eddy current and the magnetic field gradient increase with particle size. Building on this, we found that increasing the curvature of the magnetic field lines within particles also increases the eddy current force, providing an optimized route to improving the separation efficiency of fine particles.
By combining these results, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity-to-density ratio of the materials, as well as by exploiting the eddy current torque. When designing an ECS, the particle size range of the target mixture should be determined in advance, and suitable parameters for separating the mixture fixed accordingly. In summary, these results can guide the design and optimization of ECS and expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
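The Pearson correlation used above to link the simulated eddy current force to the experimentally measured repulsion distance is the standard sample correlation coefficient; a pure-Python sketch follows. The paired data values are illustrative only, not taken from the study.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical paired observations: simulated eddy current force (mN)
# vs. measured repulsion distance (cm) for particles of increasing size
force    = [0.2, 0.5, 1.1, 2.3, 4.0]
distance = [3.0, 6.5, 12.0, 22.0, 39.5]
```

For these illustrative pairs, r is above 0.99, the kind of strong correlation that, in the study, justified using the simulated force as a proxy for the experimental outcome.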
Procedia PDF Downloads 91
441 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is a tree endemic to southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. It grows in extremely arid regions of Morocco, where annual rainfall ranges between 100 and 300 mm and no other tree species can live, and its area has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for feeding the rural population and their animals, as well as for oil production; it is considered a multi-use tree. The Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate temporal variation in root-zone moisture dynamics in response to variation in climatic conditions and vegetation water uptake, using the geophysical technique of electrical resistivity imaging (ERI). This technique discriminates resistive woody roots from dry and moist soil. Time-dependent measurements (from April to July) of resistivity sections were performed along a 94 m surface transect with a fixed electrode spacing of 2 m. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit: we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The obtained section shows three distinct layers: a conductive middle layer (moist); on top, a moderately resistive layer corresponding to relatively dry soil (a calcareous formation with intercalations of marly strata), interspersed with highly resistive zones corresponding to woody roots; and, below the conductive layer, another moderately resistive layer.
Throughout the experiment there was a continuous decrease in soil moisture in the different layers. With ERI we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing δ18O in xylem water and in the range of possible water sources, we argued that rain is the main water source in winter and spring but not in summer; the trees do not exploit deep water from the aquifer, as popularly assumed, but instead use soil water at a few meters' depth. The results of the present work confirm that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 329
440 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil. In oilfields, asphaltenes are known for their ability to plug wells, surface equipment and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analysed by ICP-MS, and spectral characterization of the deasphalting products was obtained by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the heteroatoms (e.g., S, N) concentrated in asphaltenes cause further problems: environmental pollution, corrosion and poisoning of catalysts. The main objective of this work is to study the effect of deasphalting on the properties of crude oil and on the efficiency of downstream processing. Solvent extraction experiments were performed on crude oil from JSC 'Pavlodar Oil Chemistry Refinery'. The experimental results show that deasphalting also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide and mercaptans is absorption with chemical reagents directly in the oil residue; this matters because asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting separates the light fraction from the heavy, metal-bearing asphaltene part of the crude; this pretreatment is necessary because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and of organic compounds with heteroatoms. Intramolecular complexes are relatively well studied, for example the porphyrin complexes of vanadyl (VO2+) and nickel (Ni).
ICP-MS studies of V and Ni determined the effect of the different deasphalting solvents on metal extraction at the deasphalting stage and identified the best organic solvent: cyclohexane (C6H12) proved best, extracting 51.2% of the V and 66.4% of the Ni. This paper also presents a study of the physical and chemical properties and the FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics, which can be useful in considering the origin and geochemical conditions of oil accumulation, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes, and the role of deasphalted crude oil fractions in asphaltene stability is described.
Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 242
439 Association between TNF-α and Its Receptor TNFRSF1B Polymorphism with Pulmonary Tuberculosis in Tomsk, Russian Federation
Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina
Abstract:
Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. The immune response to M. tuberculosis infection is a balance between inflammatory and anti-inflammatory responses, in which Tumour Necrosis Factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered possible markers of host susceptibility to TB. However, the possible impact of polymorphisms of TNF-α and its receptor genes on TB cases in Tomsk has not been studied. The purpose of our study was therefore to investigate polymorphisms of the TNF-α (rs1800629) and TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of the polymorphisms was investigated in a case-control study of people from Tomsk. Blood was collected during routine patient examinations at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulation comparison was carried out between healthy individuals from Tomsk and the available data from the 1000 Genomes Project. For the rs1800629 polymorphism, the Tomsk population was significantly different from the Japanese (P = 0.0007) but similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910).
For the rs525891 polymorphism, the Tomsk group was significantly different from the South African population (P = 0.019), while rs652625 showed significant differences from the Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals with TB patients. No association was detected between the rs1800629 and rs652625 polymorphisms and TB. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.9; P < 0.05). Conclusion: To the best of our knowledge, this is the first report that the TNFRSF1B polymorphism rs525891 is associated with TB in the Tomsk population, the AT genotype being protective (OR = 0.61). In contrast, no significant correlation was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) polymorphisms and pulmonary TB cases in the Tomsk population. Our data expand the molecular particularities associated with TB. The study was supported by Russian Foundation for Basic Research grant #15-04-05852.
Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene
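The reported odds ratio and 95% confidence interval follow the standard 2x2-table formulas (Woolf's logit method for the CI). A minimal sketch with hypothetical genotype counts, since the abstract does not report the underlying table; the counts are chosen only so that the output lands near the published OR = 0.61 (95% CI 0.41-0.9):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
         a = exposed cases,   b = exposed controls
         c = unexposed cases, d = unexposed controls
    CI uses the log-OR standard error sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For illustrative counts (61 AT cases, 100 AT controls, 100 non-AT cases, 100 non-AT controls) this yields OR = 0.61 with a CI of roughly 0.40-0.93, the same order as the protective effect reported for rs525891.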
Procedia PDF Downloads 181
438 Peculiarities of Snow Cover in Belarus
Authors: Aleh Meshyk, Anastasiya Vouchak
Abstract:
On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thaw days have a positive mean daily temperature, which results in complete snowmelt; for instance, in December 10% of thaws occur at a mean daily temperature of 4 °C. A stable snowpack lying for over a month forms in the north-east in the first ten days of December, but in the south-west only in the last ten days of December. The cover disappears in March: in the north-east in the last ten days of the month, in the south-west in the first. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also mixed (about 10-15% a year). Another important feature of snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average snow density at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³. It can exceed 0.50 g/cm³ if the snow melts quickly, and the density of melting snow saturated with water can reach 0.80 g/cm³. The average maximum snow depth is 15-33 cm, with the minimum in Brest and the maximum in Lyntupy; the maximum registered snow depth ranges within 40-72 cm. The water content of the snowpack, like its depth and density, reaches its maximum in the second half of February to the beginning of March. The spatial distribution of the amount of liquid in snow follows the trend described above, i.e., it increases from south-west to north-east and on the highlands. The average annual maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east, and exceeds 80 mm on the central Belarusian highland.
In certain years it exceeds the average annual values by a factor of 2-3. Moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies over the country from 107 mm (Brest) to 207 mm (Novogrudok). It also varies significantly between years, which is confirmed by a high coefficient of variation (Cv): the maxima (0.62-0.69) are in the south and south-west of Belarus, and the minima (0.42-0.46) in central and north-eastern Belarus, where the snow cover is more stable. Since 1987 most gauge stations in Belarus have registered a decreasing trend in the water content of snow, which this research confirms. The deepest snow cover forms on the highlands of central and north-eastern Belarus; the Novogrudok, Minsk, Volkovysk and Sventayny highlands form a natural orographic barrier that prevents snow-bringing air masses from penetrating further into the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014.
Keywords: density, depth, snow, water content in snow
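The water content (snow water equivalent) figures above are simply the product of snow depth and density, converted to millimeters of liquid water. A one-line sketch showing how the reported 35-100 mm range follows from the quoted depths and densities:

```python
def swe_mm(depth_cm, density_g_cm3):
    """Snow water equivalent in mm of water:
    depth (cm) x density (g/cm3) x 10 (1 cm of water = 10 mm)."""
    return depth_cm * density_g_cm3 * 10.0
```

A 15 cm pack at late-January density 0.23 g/cm³ holds about 34.5 mm of water (the south-western minimum), while a 33 cm pack at 0.30 g/cm³ holds about 99 mm (the north-eastern maximum), matching the 35 mm to 80-100 mm range given in the abstract.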
Procedia PDF Downloads 161
437 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase flows, the approach remains expensive in terms of the memory and computational time required for 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work; the CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling the LBM with the classical Lagrangian particle dispersion model. Agreement between results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
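A minimal sketch of the cellular-automata transport rule summarized in the abstract above: integer particle counts live on lattice nodes and hop to a neighboring node with a probability derived from the local fluid velocity, so that both the local velocity and the particle number at each node enter the update. The 1D lattice, the hop rule, and all numerical values here are illustrative assumptions, not the paper's D3Q19/D3Q27 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_transport_step(counts, velocity, u_max=1.0):
    """Redistribute integer particle counts along a 1D lattice for one step."""
    new_counts = np.zeros_like(counts)
    for i, n in enumerate(counts):
        if n == 0:
            continue
        p_move = min(abs(velocity[i]) / u_max, 1.0)   # hop probability from local speed
        moved = rng.binomial(n, p_move)               # particles that hop this step
        target = i + int(np.sign(velocity[i]))        # downstream neighbor
        if 0 <= target < len(counts):
            new_counts[target] += moved
        else:
            new_counts[i] += moved                    # at the boundary: stay put
        new_counts[i] += n - moved                    # the rest remain
    return new_counts

counts = np.array([100, 0, 0, 0])
velocity = np.full(4, 0.5)
for _ in range(3):
    counts = ca_transport_step(counts, velocity)
print(counts, counts.sum())  # total particle number is conserved
```

The stochastic draw per node is where a per-thread random stream (e.g. cuRAND on the GPU, as in the paper) would replace NumPy's generator.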
Procedia PDF Downloads 207
436 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean’s ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, or defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales; seasonal (regulation over a period of time) or permanent; partial (focused on some maritime activities) or complete (all maritime activities); etc.
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measure to apply in order to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
Procedia PDF Downloads 123
435 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on limiting climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed in everyone’s daily life (e.g. heating, plug loads, mobility), so reducing the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease in the environmental impacts of every product. As a result, reaching the two-degree goal depends strongly on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. As a methodology, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was applied. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As a result, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up.
Thereby, for each European country, the decarbonization strategy of the electricity mix is critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal from the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers such as lignite and natural gas, resulted in an increase in the global warming potential of the electricity grid mix. After just two years, this increase was counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
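At its core, the per-kWh impact of a grid mix at a given point in time is a share-weighted sum of carrier-specific emission factors, which is what makes the time-dynamic profiles above possible. A hedged sketch of that calculation follows; the emission factors and shares are rough illustrative assumptions, not values from the CommONEnergy or Senskin models.

```python
# Illustrative emission factors in g CO2-eq per kWh (assumed values).
EMISSION_FACTORS = {
    "lignite": 1100.0,
    "natural_gas": 450.0,
    "nuclear": 12.0,
    "wind": 11.0,
    "solar_pv": 45.0,
}

def grid_mix_gwp(shares):
    """GWP of a mix (g CO2-eq/kWh), given carrier shares that sum to 1."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(share * EMISSION_FACTORS[c] for c, share in shares.items())

# A hypothetical national mix for one model year:
mix = {"lignite": 0.25, "natural_gas": 0.15, "nuclear": 0.15,
       "wind": 0.30, "solar_pv": 0.15}
print(f"{grid_mix_gwp(mix):.1f} g CO2-eq/kWh")
```

Evaluating this for each model year yields the time-dynamic profile; shifting shares from lignite to wind immediately lowers the result, which is the mechanism behind the German example above.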
Procedia PDF Downloads 187
434 Spatial Distribution of Land Use in the North Canal of Beijing Subsidiary Center and Its Impact on the Water Quality
Authors: Alisa Salimova, Jiane Zuo, Christopher Homer
Abstract:
The objective of this study is to analyse land use in the North Canal riparian zone by remote sensing analysis in ArcGIS, using 30 cloudless Landsat 8 open-source satellite images from May to August of 2013 and 2017. Land cover, urban construction, heat island effect, vegetation cover, and water system change were chosen as the main parameters and further analysed to evaluate their impact on the North Canal water quality. The methodology involved the following steps. First, 30 cloudless satellite images were collected from the Landsat TM open-source image database. The visual interpretation method was used to determine the different land types in the catchment area; after primary and secondary classification, 28 land cover types in total were identified. Visual interpretation with the help of ArcGIS was used for grassland monitoring, and US Landsat TM remote sensing images with a resolution of 30 meters were processed to analyse the vegetation cover. The water system was analysed using visual interpretation on the GIS software platform to decode the target area, water use, and coverage. Monthly measurements of water temperature, pH, BOD, COD, ammonia nitrogen, total nitrogen, and total phosphorus in 2013 and 2017 were taken from three locations on the North Canal in Tongzhou district. These parameters were used for water quality index calculation and compared to land-use changes. The results of this research were promising. The vegetation coverage of the North Canal riparian zone was higher in 2017 than in 2013. The surface brightness temperature value was positively correlated with vegetation coverage density and with the distance from the surface of the water bodies. This indicates that vegetation coverage and the water system have a great effect on temperature regulation and the urban heat island effect. Surface temperature was higher in 2017 than in 2013, consistent with a warming trend.
The water volume in the river area has been partially reduced, indicating a potential water scarcity risk in the North Canal watershed. Between 2013 and 2017, urban residential, industrial, and mining storage land areas increased significantly compared to other land use types; nevertheless, water quality improved significantly in 2017 compared to 2013. This observation indicates that the Tongzhou Water Restoration Plan produced positive results and that water management in Tongzhou district had improved.
Keywords: North Canal, land use, riparian vegetation, river ecology, remote sensing
Procedia PDF Downloads 115
433 Fabrication of Aluminum Nitride Thick Layers by Modified Reactive Plasma Spraying
Authors: Cécile Dufloux, Klaus Böttcher, Heike Oppermann, Jürgen Wollweber
Abstract:
Hexagonal aluminum nitride (AlN) is a promising candidate for several wide band gap semiconductor applications such as deep UV light emitting diodes (UVC LEDs) and fast power transistors (HEMTs). To date, bulk AlN single crystals are still commonly grown by physical vapor transport (PVT). Single crystalline AlN wafers obtained from this process could offer suitable substrates for the defect-free growth of ultimately active AlGaN layers; however, these wafers still suffer from small sizes, limited delivery quantities, and high prices. Although there is already increasing interest in the commercial availability of AlN wafers, comparatively cheap Si, SiC, or sapphire is still predominantly used as substrate material for the deposition of active AlGaN layers. Nevertheless, due to a lattice mismatch of up to 20%, the obtained material shows high defect densities and is therefore less suitable for high power devices as described above. Hence, the use of AlN with specially adapted properties for optical and sensor applications could be promising for mass market products, which seem to impose fewer requirements. To respond to the demand for suitable AlN target material for the growth of AlGaN layers, we have designed an innovative technology based on reactive plasma spraying. The goal is to produce coarse-grained AlN boules with an N-terminated columnar structure and high purity. In this process, aluminum is injected into a microwave-stimulated nitrogen plasma. AlN, the product of the reaction between the aluminum powder and the plasma-activated N₂, is deposited onto the target. We used an aluminum filament as the starting material to minimize oxygen contamination during the process. The material was guided through the nitrogen plasma at a mass turnover of 10 g/h. To avoid any impurity contamination by erosion of the electrodes, an electrode-less discharge was used for the plasma ignition.
The pressure was maintained at 600-700 mbar, so the plasma reached a temperature high enough to vaporize the aluminum, which subsequently reacted with the surrounding plasma. The obtained products consist of thick polycrystalline AlN layers with a diameter of 2-3 cm. The crystallinity was determined by X-ray crystallography. The grain structure was systematically investigated by optical and scanning electron microscopy. Furthermore, Raman spectroscopy was performed to provide evidence of stress in the layers. This paper will discuss the effects of process parameters such as microwave power and deposition geometry (specimen holder, radiation shields, ...) on the topography, crystallinity, and stress distribution of AlN.
Keywords: aluminum nitride, polycrystal, reactive plasma spraying, semiconductor
Procedia PDF Downloads 281
432 Placement Characteristics of Major Stream Vehicular Traffic at Median Openings
Authors: Tathagatha Khan, Smruti Sourava Mohapatra
Abstract:
Median openings are provided in the raised medians of multilane roads to facilitate U-turn movements. The U-turn is a highly complex and risky maneuver because the U-turning vehicle (minor stream) makes a 180° turn at the median opening and merges with the approaching through traffic (major stream). A U-turning vehicle requires a suitable gap in the major stream to merge, and during this process the possibility of a merging conflict develops. Therefore, median openings are potential hot spots of conflict and pose safety concerns. Traffic at median openings can be managed efficiently, and with enhanced safety, when the capacity of the facility has been estimated correctly. The capacity of U-turns at median openings is estimated by Harder’s formula, which requires three basic parameters: the critical gap, the follow-up time, and the conflicting flow rate. The estimation of the conflicting flow rate under mixed traffic conditions is greatly complicated by the absence of lane discipline and the discourteous behavior of drivers. Understanding the placement of major stream vehicles at a median opening is therefore essential for estimating the conflicting traffic faced by the U-turning movement. Placement data of major stream vehicles were collected at different sections of 4-lane and 6-lane divided multilane roads. All the test sections were free from the effects of intersections, bus stops, parked vehicles, curvature, pedestrian movements, or any other side friction. For the purpose of analysis, the vehicles were divided into 6 categories: motorized two-wheelers, autorickshaws (3-W), small cars, big cars, light commercial vehicles, and heavy vehicles. For the collection of placement data, the entire road width was divided into sections of 25 cm each, numbered seriatim from the pavement edge (curbside) to the end of the road.
The placement of major stream vehicles crossing the reference line was recorded by videographic techniques on various weekdays. The collected data for each category of vehicle at all the test sections were converted into a frequency table with class intervals of 25 cm and into placement frequency curves. Separate distribution fittings were tried for 4-lane and 6-lane divided roads. The variation of the placement characteristics with the major stream traffic volume has also been explored. The findings of this study will be helpful in determining the conflict volume at median openings. The present work therefore holds significance for traffic planning, operation, and design to alleviate bottlenecks, collision risk, and delay at median openings in general, and at median openings in developing countries in particular.
Keywords: median opening, U-turn, conflicting traffic, placement, mixed traffic
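The capacity formula named in the abstract above combines exactly the three parameters listed there. A sketch using the common gap-acceptance form of that formula follows; the numerical inputs are illustrative assumptions, not values from this study.

```python
import math

def harders_capacity(q_c, t_c, t_f):
    """Minor-stream (U-turn) capacity in veh/h, gap-acceptance form.

    q_c : conflicting major-stream flow rate (veh/h)
    t_c : critical gap (s)
    t_f : follow-up time (s)
    """
    q = q_c / 3600.0  # conflicting flow in veh/s
    return 3600.0 * q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))

# Hypothetical inputs: 600 veh/h conflicting flow, 6.5 s critical gap,
# 3.5 s follow-up time.
print(round(harders_capacity(600.0, 6.5, 3.5)), "veh/h")
```

The formula makes the roles of the parameters visible: raising the conflicting flow or the critical gap shrinks capacity exponentially, while a shorter follow-up time lets more U-turning vehicles use each accepted gap, which is why the placement-based estimate of the conflicting flow matters so much.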
Procedia PDF Downloads 139
431 The Spatial Circuit of the Audiovisual Industry in Argentina: From Monopoly and Geographic Concentration to New Regionalization and Democratization Policies
Authors: André Pasti
Abstract:
Historically, the communication sector in Argentina has been characterized by intense monopolization and geographical concentration in the city of Buenos Aires. In 2000, the four major media conglomerates in operation – Clarín, Telefónica, America and Hadad – controlled 84% of the national media market. By 2009, new policies were implemented as a result of demands from civil society organizations. Legally, a new regulatory framework was approved: law 26,522 on Audiovisual Communication Services. These policies were supposed to create new conditions for the development of the audiovisual economy in the territory of Argentina. The regionalization of audiovisual production and the democratization of channels and access to media were among the priorities. This paper analyses the main changes and continuities in the organization of the spatial circuit of the audiovisual industry in Argentina brought about by these new policies, which aim at increasing the diversity of audiovisual producers and promoting regional audiovisual industries. For this purpose, a national program for the development of audiovisual centers within the country was created. This program fostered a federalized production network based on nine audiovisual regions and 40 nodes. Each node created the technical, financial, and organizational conditions to gather different actors in audiovisual production, such as SMEs, social movements, and local associations. The expansion of access to technical networks was also a concern of other policies, such as ‘Argentina Connected’, whose objective was to expand access to broadband Internet. The Open Digital Television network also received considerable investment. Furthermore, measures were carried out to impose limits on the concentration of ownership, eliminate oligopolies, and ensure more competition in the sector. These actions were intended to force the media conglomerates to divide into smaller groups.
Nevertheless, the corporations that compose these conglomerates have resisted strongly, making full use of their economic and judicial power. Indeed, the absence of any effective impact of such measures is attested by the fact that the audiovisual industry remains strongly concentrated in Argentina. Overall, these new policies were properly designed to decentralize audiovisual production and expand the regional diversity of the audiovisual industry. However, the effective transformation of the organization of the audiovisual circuit in the territory faced several sources of resistance. This can be explained first and foremost by the ideological and economic power of the media conglomerates. In the second place, there is an inherited inertia from the unequal distribution of the objects needed for audiovisual production and consumption. Lastly, the resistance also lies in financial needs and in the excessive dependence on the state for the promotion of regional audiovisual production.
Keywords: Argentina, audiovisual industry, communication policies, geographic concentration, regionalization, spatial circuit
Procedia PDF Downloads 217
430 Chemical Technology Approach for Obtaining Carbon Structures Containing Reinforced Ceramic Materials Based on Alumina
Authors: T. Kuchukhidze, N. Jalagonia, T. Archuadze, G. Bokuchava
Abstract:
The growing scientific and technological progress of modern civilization makes it urgent to produce construction materials that can work successfully under conditions of high temperature, radiation, pressure, speed, and chemically aggressive environments. Very few types of materials can withstand such extreme conditions, and among them ceramic materials rank first. Corundum ceramics is the most useful material for creating constructive nodes and products for various purposes, owing to its low cost, easily accessible raw materials, and good combination of physical-chemical properties. However, ceramic composite materials have one disadvantage: they are less plastic and have lower toughness. In order to increase plasticity, the ceramics are reinforced with various dopants that reduce crack growth. It has been shown that adding even a small amount of carbon fibers and carbon nanotubes (CNTs) as reinforcing material significantly improves the mechanical properties of the products while keeping the advantages of alundum ceramics. Graphene in a composite material acts in the same way as inorganic dopants (MgO, ZrO₂, SiC, and others) and performs the role of an aluminum oxide inhibitor: it creates a shell, which makes it possible to reduce the sintering temperature, and at the same time it acts as a damper, because scattering of a shock wave takes place on the carbon structures. The application of different structural modifications of carbon (graphene, nanotubes, and others) as reinforcing material makes it possible to create multi-purpose, highly requested composite materials based on alundum ceramics. The present work offers a simplified technology for obtaining aluminum oxide ceramics reinforced with carbon nanostructures, in which chemical modification with doping carbon nanostructures is implemented during the synthesis of the final powdery composite – alumina.
In the charge, the doping carbon nanostructures are connected to the matrix substance by C-O-Al bonds, which provides their homogeneous spatial distribution. In the ceramic obtained by consolidating such powders, the carbon fragments are equally distributed throughout the entire aluminum oxide matrix, which increases bending strength and crack resistance. The proposed way of preparing the charge simplifies the technological process and decreases energy consumption and synthesis duration, and therefore requires less financial expense. In the implementation of this work, modern instrumental methods were used: electron and optical microscopy, X-ray structural and granulometric analysis, and UV, IR, and Raman spectroscopy.
Keywords: ceramic materials, α-Al₂O₃, carbon nanostructures, composites, characterization, hot-pressing
Procedia PDF Downloads 121
429 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study
Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno
Abstract:
The reduction of CO₂ emissions into the atmosphere as a result of human activity is one of the most important environmental challenges of the coming decades. The emission of CO₂ related to the use of fossil fuels is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emissions. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading, aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was to upgrade the biogas to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy. This is a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, North Italy) has a digestion capacity of 400,000 tons of biomass/year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. Dehydration of the raw biogas by condensation. 2. Removal of trace impurities such as H₂S via absorption. 3. Separation of CO₂ and methane via a membrane separation process. 4. Removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms.
The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household network. At the same time, the CO₂ reaches a purity of > 98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure to be suitable for food-grade applications.
Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade
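For the membrane step (step 3 above), the relation between feed, permeate, and retentate compositions follows from simple binary mass balances. The sketch below illustrates this with assumed numbers (feed composition, stage cut, and permeate purity are not data from the pilot plant), yet it lands in the 96-99 vol% methane range reported above.

```python
def retentate_composition(x_feed_co2, stage_cut, y_perm_co2):
    """CO2 fraction in the retentate from overall and CO2 balances.

    x_feed_co2 : CO2 mole fraction in the feed
    stage_cut  : permeate flow / feed flow
    y_perm_co2 : CO2 mole fraction in the permeate
    """
    x_ret = (x_feed_co2 - stage_cut * y_perm_co2) / (1.0 - stage_cut)
    assert 0.0 <= x_ret <= 1.0, "inconsistent balance inputs"
    return x_ret

# Hypothetical case: raw biogas with 35% CO2; 34% of the feed permeates
# through the membrane at 98% CO2 purity.
x_ret = retentate_composition(0.35, 0.34, 0.98)
print(f"retentate: {100 * (1 - x_ret):.1f}% CH4")
```

The same balance run in reverse shows why the permeate still needs the polishing steps described above before it can reach the 99.9% food-grade CO₂ purity.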
Procedia PDF Downloads 212
428 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality; it is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. First, a composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well the observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of the data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolve with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including state-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the use of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation.
Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women’s empowerment
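The household-level intra-class correlation reported above is the share of the total outcome variance attributable to the grouping level, ICC = var_between / (var_between + var_within). A minimal sketch of that calculation, with made-up variance components (not the NFHS estimates):

```python
def icc(var_between, var_within):
    """Intra-class correlation from two-level variance components."""
    return var_between / (var_between + var_within)

# Assumed variance components chosen to reproduce an ICC near the
# reported household-level value of 0.21:
print(round(icc(0.21, 0.79), 2))
```

An ICC of 0.21 means about a fifth of the variation in child undernutrition clusters within households, which is why the abstract argues for household-level interventions.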
Procedia PDF Downloads 34