Search results for: optimal solution
788 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, almost unconsidered is the question, which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company’s technology ability, the prototype manufacturability and the potential solution space as well as external factors like the market accuracy, relevance and volatility will be analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, it is necessary to rate each internal and external factor in terms of the importance for the overall development task. Secondly, each requirement has to be evaluated for every single internal and external factor appropriate to their suitability for empirical-adaptive development. Finally, the total sums of internal and external side are composed in the Agile-Indicator. 
Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements whose market reception is uncertain into empirical-adaptive or deterministic-normative development scopes.
Keywords: agile, highly iterative development, agile-indicator, product development
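As a sketch of the scoring and clustering steps, the weighted Agile-Indicator and a minimal fuzzy c-means routine can be written as follows. The factor names, weights, and requirement scores are hypothetical illustrations, and the FCM routine is a plain NumPy sketch, not the authors' actual tooling.

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)             # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Hypothetical Agile-Indicator: weighted internal + external factor scores.
weights_int = np.array([0.4, 0.3, 0.3])   # e.g. technology ability, prototypability, solution space
weights_ext = np.array([0.5, 0.2, 0.3])   # e.g. market accuracy, relevance, volatility
scores_int = np.array([[2, 3, 1], [5, 4, 5], [1, 2, 2], [4, 5, 4]])  # 4 requirements, scale 1-5
scores_ext = np.array([[1, 2, 2], [4, 5, 4], [2, 1, 1], [5, 4, 5]])
agile_indicator = scores_int @ weights_int + scores_ext @ weights_ext  # one value per requirement

centers, u = fuzzy_c_means(agile_indicator[:, None], n_clusters=2)
print(agile_indicator, u.argmax(axis=1))
```

With the synthetic scores above, the two fuzzy clusters separate the low-indicator (deterministic-normative) requirements from the high-indicator (empirical-adaptive) ones.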
Procedia PDF Downloads 246
787 Achieving Household Electricity Saving Potential Through Behavioral Change
Authors: Lusi Susanti, Prima Fithri
Abstract:
The rapid growth of Indonesia's population is directly proportional to the energy needs of the country, but not all of the Indonesian population can enjoy electricity. Indonesia's electrification ratio is still around 80.1%, which means that approximately 19.9% of households in Indonesia do not yet receive electrical energy. Household electricity consumption in Indonesia is still dominated by the urban public. In the city of Padang, West Sumatra, Indonesia, about 94.10% of households are customers of the state power utility (PLN). At the heart of the issue is how efficiently people use energy: user behavior in utilizing electricity is significant, and any remedial solution must change users' habits if energy savings are to be sustainable. This study attempts to identify the user behaviors and lifestyles that affect household electricity consumption and to evaluate the potential for energy saving. The behavioral component is frequently underestimated or ignored in analyses of household electrical energy end use, partly because of its complexity. It is influenced by socio-demographic factors, culture, attitudes, aesthetic norms and comfort, as well as social and economic variables. An intensive questionnaire survey, in-depth interviews and statistical analysis were carried out to collect scientific evidence for behavior-based instruments to reduce electricity consumption in the household sector. The questionnaire was developed to cover five factors assumed to affect the electricity consumption pattern in the household sector: attitude, energy price, household income, knowledge and other determinants. The survey was carried out in Padang, West Sumatra Province, Indonesia. About 210 questionnaires were proportionally distributed to households in 11 districts of Padang. Stratified sampling was used to select respondents.
The results show that household size, income, payment method and size of house are factors affecting electricity-saving behavior in the residential sector. Household expenses on electricity are strongly influenced by gender, type of job, level of education, size of house, income, payment method and level of installed power. These results provide scientific evidence for stakeholders on the potential of controlling electricity consumption and for government energy policy design in the residential sector.
Keywords: electricity, energy saving, household, behavior, policy
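The proportional distribution of questionnaires across districts amounts to allocating them in proportion to each district's share of households. A minimal sketch, with invented household counts for four illustrative districts:

```python
# Proportional stratified allocation: each district receives questionnaires in
# proportion to its share of households (the counts below are hypothetical).
households = {"A": 12000, "B": 8000, "C": 20000, "D": 10000}  # households per district
total_q = 210                                                  # questionnaires to distribute
total_h = sum(households.values())
alloc = {d: round(total_q * h / total_h) for d, h in households.items()}
print(alloc)
```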
Procedia PDF Downloads 438
786 Structural Characterization and Hot Deformation Behaviour of Al3Ni2/Al3Ni In-Situ Core-Shell Intermetallic in Al-4Cu-Ni Composite
Authors: Ganesh V., Asit Kumar Khanra
Abstract:
An in-situ powder metallurgy technique was employed to create Ni-Al3Ni/Al3Ni2 core-shell-shaped aluminum-based intermetallic reinforced composites. The impact of Ni addition on the phase composition, microstructure, and mechanical characteristics of Al-4Cu-xNi (x = 0, 2, 4, 6, 8, 10 wt.%) in relation to various sintering temperatures was investigated. Microstructure evolution was extensively examined using X-ray diffraction (XRD), scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), and transmission electron microscopy (TEM) techniques. Initially, under sintering conditions, the formation of "Single Core-Shell" structures was observed, consisting of Ni as the core with Al3Ni2 intermetallic, whereas samples sintered at 620 °C exhibited both "Single Core-Shell" and "Double Core-Shell" structures containing Al3Ni2 and Al3Ni intermetallics formed between the Al matrix and Ni reinforcements. The composite achieved a high compressive yield strength of 198.13 MPa and an ultimate strength of 410.68 MPa, with 24% total elongation for the sample containing 10 wt.% Ni. Additionally, there was a substantial increase in hardness, reaching 124.21 HV, which is 2.4 times higher than that of the base aluminum. Nanoindentation studies showed hardness values of 1.54, 4.65, 21.01, 13.16, 5.52, 6.27, and 8.39 GPa corresponding to the α-Al matrix, Ni, Al3Ni2, the Ni/Al3Ni2 interface, Al3Ni, and their respective interfaces. Even at 200 °C, the composite retained 54% of its room-temperature strength (90.51 MPa). To investigate the deformation behavior of the composite material, experiments were conducted at deformation temperatures ranging from 300 °C to 500 °C, with strain rates varying from 0.0001 s⁻¹ to 0.1 s⁻¹. A sine-hyperbolic constitutive equation was developed to characterize the flow stress of the composite, which exhibited a hot deformation activation energy of 231.44 kJ/mol, significantly higher than that for self-diffusion in pure aluminum.
The formation of Al2Cu intermetallics at grain boundaries and of Al3Ni2/Al3Ni within the matrix hindered dislocation movement, leading to an increase in activation energy, which might adversely affect high-temperature applications. Two models, a strain-compensated Arrhenius model and an artificial neural network (ANN) model, were developed to predict the composite's flow behavior. The ANN model outperformed the strain-compensated Arrhenius model, with a lower average absolute relative error of 2.266%, a smaller root mean square error of 1.2488 MPa, and a higher correlation coefficient of 0.9997. Processing maps revealed that the optimal hot-working conditions for the composite were in the temperature range of 420-500 °C and at strain rates between 0.0001 s⁻¹ and 0.001 s⁻¹. The changes in the composite microstructure were successfully correlated with the theory of processing maps, considering temperature and strain rate conditions. The uneven distribution of the shape and size of the core-shell/Al3Ni intermetallic compounds influenced the flow stress curves, leading to dynamic recrystallization (DRX), followed by partial dynamic recovery (DRV), and ultimately strain hardening. This composite material shows promise for applications in the automobile and aerospace industries.
Keywords: core-shell structure, hot deformation, intermetallic compounds, powder metallurgy
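The sine-hyperbolic constitutive equation referred to above has the standard Arrhenius form strain_rate = A[sinh(alpha*sigma)]^n exp(-Q/RT), solved for flow stress via the Zener-Hollomon parameter. The sketch below uses the reported activation energy Q = 231.44 kJ/mol, while A, alpha, and n are hypothetical placeholders, since the fitted constants are not given in the abstract.

```python
import numpy as np

R = 8.314          # J/(mol K), universal gas constant
Q = 231.44e3       # J/mol, hot deformation activation energy from the study
A, alpha, n = 1.0e15, 0.012, 4.5   # hypothetical material constants (1/s, 1/MPa, -)

def flow_stress(strain_rate, T_kelvin):
    """Sine-hyperbolic Arrhenius model, strain_rate = A*sinh(alpha*sigma)^n * exp(-Q/RT),
    solved for sigma via the Zener-Hollomon parameter Z = strain_rate*exp(Q/RT)."""
    Z = strain_rate * np.exp(Q / (R * T_kelvin))
    x = (Z / A) ** (1.0 / n)
    return np.log(x + np.sqrt(x**2 + 1.0)) / alpha   # asinh(x)/alpha, in MPa

# Flow stress falls with temperature and rises with strain rate:
print(flow_stress(0.001, 573.15), flow_stress(0.001, 773.15), flow_stress(0.1, 773.15))
```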
Procedia PDF Downloads 20
785 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid extracted from the fruits of various Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and diabetic nerve pain. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of intravenously administered capsaicin make topical application advantageous. In this study, we evaluated differences in the dissolution characteristics of an 11 mg capsaicin patch (purchased on the market) at different dissolution rotation speeds. The patch area is 308 cm2 (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. The 308 cm2 patch was cut into 9 cm2 pieces; each piece was placed against a disc (delivery side up), retained with a stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), the flow rate was 0.9 mL/min, the injection volume 10 μL and the detection wavelength 222 nm.
The RP-LC method is simple, sensitive and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with dissolution rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) can be considered a discriminatory test, able to point out differences in the dissolution rate of capsaicin at different rotation speeds.
Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
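When aliquots are withdrawn and replaced with fresh medium, as in the 5 mL sampling described, the measured concentrations are usually corrected for the drug removed at earlier time points. A common correction is sketched below with hypothetical concentration readings; only the volumes mirror the method (5 mL aliquots from 500 mL).

```python
# Cumulative-release correction for sample-and-replace dissolution testing:
# C_corr(n) = C(n) + (Vs / Vd) * sum of C(1..n-1), where Vs is the aliquot
# volume and Vd the vessel volume.
Vs, Vd = 5.0, 500.0
measured = [0.8, 2.1, 3.4, 4.0]   # hypothetical µg/mL at 1, 4, 8, 12 h

corrected = []
removed = 0.0
for c in measured:
    corrected.append(c + (Vs / Vd) * removed)  # add back drug lost to sampling
    removed += c
print(corrected)
```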
Procedia PDF Downloads 227
784 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from binary logic systems to multiple-valued logic systems. Its inherent negative differential resistance characteristic, ultra-high-speed switching, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of an input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed, with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. Switching behavior measurements were successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved.
In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new way for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 100
783 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unknown Persons
Authors: Saule Mussabekova
Abstract:
Some chemical elements that are widely present in nature are seldom found in the human body, and vice versa. This reflects a peculiarity of elemental accumulation in the body and its selective use of elements regardless of widely varying parameters of the external environment. Microelemental identification of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for identifying a person. Under the long-standing technogenic pressure of large industrial cities, with a multi-factor toxic effect from many industrial enterprises specific to each region, it is important to assess the relevance and role of studies of human hair in assessing the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator: it allows the ecological situation to be assessed and large territories to be zoned by geological and chemical methods. In addition, monitoring the concentrations of chemical elements in the regions of Kazakhstan makes it possible to use these data in the forensic medical identification of unidentified dead bodies. Methods based on determining the chemical composition of hair, with subsequent computer processing, allow the data to be compared with average values for a given sex and age and causally significant deviations to be revealed. This makes it possible to tentatively infer the person's region of residence, focusing police search efforts for people who are unaccounted for, and allows purposeful legal actions for further identification under a more optimal and strictly individual scheme of personal identification. Hair is the most suitable material for forensic research, as it can be stored long-term without time limits or special equipment.
Moreover, the quantitative analysis of microelements correlates well with the level of environmental pollution, reflects occupational diseases and helps with pinpoint accuracy not only to diagnose the region of a person's temporary residence but also to establish the regions of his migration. Peculiarities of the elemental composition of human hair have been established, regardless of age and sex, for persons residing in specific territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of the population in different regions of Kazakhstan have been systematized. Coefficients of concentration of the studied elements in hair, relative to average values for the region, have been calculated for each region. Groups of regions with a specific spectrum of elements have been identified; these elements accumulate in hair in quantities exceeding average indexes. Our results showed significant differences in the concentrations of chemical elements between the studied groups and showed that the population of Kazakhstan is exposed to different toxic substances, depending on the emissions to the atmosphere from the industrial enterprises dominating in each separate region. The research showed that the elemental composition of the hair of people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements.
Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements
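The concentration coefficients described above are ratios of an individual's hair element level to the regional average; elements with coefficients well above unity point to region-specific accumulation. A minimal sketch, with all values invented for illustration:

```python
# Concentration coefficient Kc = element level in a hair sample / regional mean.
# Elements with Kc well above 1 suggest region-specific accumulation.
# All values below are hypothetical illustration, not data from the study.
regional_mean = {"Pb": 2.0, "Zn": 180.0, "Cu": 11.0, "As": 0.1}   # µg/g
sample =        {"Pb": 5.2, "Zn": 170.0, "Cu": 12.0, "As": 0.35}  # µg/g

kc = {el: sample[el] / regional_mean[el] for el in regional_mean}
flagged = sorted(el for el, k in kc.items() if k > 1.5)  # candidate region markers
print(kc, flagged)
```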
Procedia PDF Downloads 142
782 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh
Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter
Abstract:
The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is elevating the temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the urban heat island (UHI) effect in developing cities around the world. An increase in the amount of urban vegetation cover can be a useful solution for reducing UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information Systems (GIS), remote sensing (RS), and regression analysis. A land cover map was prepared through interactive supervised classification of remotely sensed data from a Landsat ETM+ image, along with NDVI differencing, using ArcGIS. LST and NDVI values were extracted from the same image. The regression analysis between LST and NDVI indicates that within the study area, UHI intensity is directly correlated with LST and negatively correlated with NDVI: surface temperature decreases as vegetation cover increases, and UHI intensity decreases with it.
Moreover, there are noticeable differences in the relationship between LST and NDVI based on the type of LULC. In other words, depending on the type of land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions as well as suggesting suitable actions for mitigating UHI intensity within the study area.
Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island
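The LST-NDVI regression described above reduces to an ordinary least-squares fit over pixel samples. The sketch below uses synthetic pixel values constructed to reproduce the reported negative correlation; it is not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
ndvi = rng.uniform(-0.1, 0.8, 200)                    # synthetic pixel NDVI values
lst = 38.0 - 12.0 * ndvi + rng.normal(0, 1.0, 200)    # synthetic LST (deg C): cooler where greener

slope, intercept = np.polyfit(ndvi, lst, 1)           # least-squares line LST = slope*NDVI + b
r = np.corrcoef(ndvi, lst)[0, 1]                      # Pearson correlation coefficient
print(f"slope={slope:.2f} degC per NDVI unit, r={r:.2f}")
```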
Procedia PDF Downloads 272
781 Inhibitory Action of Fatty Acid Salts against Cladosporium cladosporioides and Dermatophagoides farinae
Authors: Yui Okuno, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Introduction: Fungi and mites are known allergens that cause allergic diseases such as bronchial asthma and allergic rhinitis. Cladosporium cladosporioides is one of the fungi most often detected in the indoor environment and causes pollution and deterioration. Dermatophagoides farinae is a major source of indoor mite allergens. Therefore, antifungal agents with high safety and a strong antifungal effect are required. Fatty acid salts are known to have antibacterial activities. This report describes the effects of fatty acid salts against Cladosporium cladosporioides NBRC 30314 and Dermatophagoides farinae. Methods: Potassium salts of 9 fatty acids (C4:0, C6:0, C8:0, C10:0, C12:0, C14:0, C18:1, C18:2, C18:3) were prepared by mixing each fatty acid with the appropriate amount of KOH solution to a concentration of 175 mM and pH 10.5. For the antifungal assay, the spore suspension (3.0 × 10⁴ spores/mL) was mixed with a sample of fatty acid potassium (final concentration 175 mM). Samples were counted at 0, 10, 60 and 180 min by plating (100 µL) on PDA. Fungal colonies were counted after incubation for 3 days at 30 °C. The MIC (minimum inhibitory concentration) against the fungus was determined by the two-fold dilution method. Each fatty acid salt was inoculated separately with 400 µL of C. cladosporioides at 3.0 × 10⁴ spores/mL. The mixtures were incubated at the respective temperature for each organism for 10 min. The tubes were then incubated at 30 °C for 7 days and examined for growth of spores on PDA. For the acaricidal assay, twenty D. farinae adult females were used; each adult was covered completely with 2 µL of fatty acid potassium for 1 min. The adults were then dried with filter paper. The filter paper was folded, fixed with two clips and kept at 25 °C and 64% RH. Mortalities were determined 48 h after treatment under the microscope. D. farinae was considered dead if its appendages did not move when prodded with a pin.
Results and Conclusions: The results show that C8K, C10K, C12K and C14K were effective in decreasing the survival rate (by 4 log units) within 10 min of incubation against C. cladosporioides. C18:3K decreased it by 4 log units within 60 min of incubation. C12K showed the highest antifungal activity, and the MIC of C12K was 0.7 mM. On the other hand, the fatty acid potassium salts showed no acaricidal effects against D. farinae: the activity of D. farinae was not adversely affected after 48 hours. These results indicate that C12K has high antifungal activity against C. cladosporioides and suggest that fatty acid potassium salts can be used as antifungal agents.
Keywords: fatty acid salts, antifungal effects, acaricidal effects, Cladosporium cladosporioides, Dermatophagoides farinae
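The "4 log unit" decrease in survival reported above is log10(N0/Nt) of the viable spore counts. A small sketch using the study's inoculum of 3.0 × 10⁴ spores/mL and a hypothetical survivor count:

```python
import math

def log_reduction(n0, nt):
    """log10 reduction in viable spore count (n0 before, nt after treatment)."""
    return math.log10(n0 / nt)

n0 = 3.0e4            # spores/mL, the inoculum used in the study
nt_c12k = 3.0         # hypothetical survivors/mL after 10 min in C12K
print(log_reduction(n0, nt_c12k))   # a "4 log unit" decrease
```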
Procedia PDF Downloads 273
780 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process
Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian
Abstract:
The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. To realize the algal bioenergy opportunity, it is crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a “green” reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the HT aqueous effluent by growing microalgae in it while producing renewable algal biomass. As already demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). The toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was ca. 75 ppm in all cases.
Experimental conditions were set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The resulting concentrations of specific NOCs were lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water clean-up step making use of the existing hydrothermal catalytic facility.
Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass
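Matching the effluent's nitrogen level to a commercial medium is a straightforward dilution calculation that also sets the residual NOC levels. In the sketch below the nitrogen and NOC concentrations are hypothetical, chosen only so the diluted values land at the 8.5 mg/L and 0.5 mg/L levels quoted above.

```python
# Dilute the hydrothermal (HT) aqueous phase so its nitrogen concentration
# matches a reference growth medium; NOC levels drop by the same factor.
# All concentrations are hypothetical illustration values (mg/L).
n_ht, n_medium = 1200.0, 75.0          # total N in HT effluent vs. target medium
dilution = n_ht / n_medium             # required dilution factor

noc_ht = {"2-pyrrolidinone": 136.0, "beta-PEA": 8.0}
noc_diluted = {k: v / dilution for k, v in noc_ht.items()}
ic50 = 75.0                            # ppm, approximate IC50 reported for the NOCs
assert all(v < ic50 for v in noc_diluted.values())   # diluted NOCs stay below IC50
print(dilution, noc_diluted)
```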
Procedia PDF Downloads 410
779 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension
Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan
Abstract:
Hypertension (HD) is the most common cardiovascular pathology causing disability and mortality in the working population. In hypertension, death is most often due to heart failure (HF), which is based on myocardial remodeling. Recently, endothelial dysfunction (EDF), a violation of the functional state of the vascular endothelium, has been assigned a significant role in the structural changes of the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies: they mediate inflammation, modify proteins and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. For the quantitative determination of OMP products, a method based on the reaction of oxidized amino acid residues of proteins with 2,4-dinitrophenylhydrazine (DNPH), forming 2,4-dinitrophenylhydrazones, was used; their amount was determined spectrophotometrically. The optical density of the formed carbonyl derivatives (dinitrophenylhydrazones) was recorded at different wavelengths: 356 nm for neutral aliphatic ketone dinitrophenylhydrazones (KDNPH); 370 nm for neutral aliphatic aldehyde dinitrophenylhydrazones (ADNPH); 430 nm for basic aliphatic KDNPH; 530 nm for basic aliphatic ADNPH. Nitric oxide was determined by photometry using the Griess reagent. Absorbance was measured on a Thermo Scientific Evolution 201 spectrophotometer at a wavelength of 546 nm.
The results showed that patients with arterial hypertension have an increased level of nitric oxide in blood serum, as well as a tendency toward increased intensity of oxidative modification of proteins at wavelengths of 270 nm and 363 nm, indicating a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increased intensity of oxidative modification of blood plasma proteins in the studied patients reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.
Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress
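Carbonyl content from DNPH-derivative absorbance is commonly computed with the Beer-Lambert law, using a molar absorptivity of about 22,000 M⁻¹cm⁻¹ for aliphatic hydrazones; the absorbance and protein values in the sketch below are hypothetical, not measurements from this study.

```python
# Carbonyl content (nmol per mg protein) from DNPH-derivative absorbance,
# via Beer-Lambert: C = A / (eps * l). eps for aliphatic hydrazones is
# commonly taken as 22,000 1/(M*cm); sample values here are hypothetical.
eps = 22000.0        # 1/(M*cm), molar absorptivity of DNPH hydrazones
path = 1.0           # cm, cuvette path length
absorbance = 0.110   # measured absorbance (hypothetical)
protein = 2.0        # mg/mL protein in the assayed sample (hypothetical)

carbonyl_M = absorbance / (eps * path)                        # mol/L of carbonyls
carbonyl_nmol_per_mg = carbonyl_M * 1e9 / (protein * 1000.0)  # nmol/mL over mg/mL
print(round(carbonyl_nmol_per_mg, 3))
```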
Procedia PDF Downloads 108
778 An AHP Study on the Migrant and Refugee Employees' Occupational Health and Safety Issues in Turkey
Authors: Cengiz Akyildiz, Ismail Ekmekci
Abstract:
In the past 15 years, many people have sought refuge and emigrated to developed countries due to the civil war in Syria, terrorism and turmoil in Iraq, Iran and Afghanistan, hunger in Africa, and the search for work. Many of these people came to Turkey. By the end of 2019, regular and irregular migrants, asylum seekers and foreigners under international protection in Turkey numbered about 6 million people, the majority of them Syrians. Approximately 2,800,000 immigrants and refugees are in the workforce. Relative to the local labor force, migrant workers in our country constitute the largest proportion among all countries in the world. About 2.5 million of these employees, a high rate of about 90%, work informally, without legal records or valid employment contracts, and cannot benefit from Occupational Health and Safety (OHS) services. Migrant workers generally receive lower wages than local workers and work longer hours under worse conditions; they are often subjected to human rights violations, harassment, human trafficking and violence. Migrant workers face problems such as OHS practices, environmental and occupational exposures, language/cultural barriers, access to health services, and lack of documentation. The OHS problems of these employees are therefore becoming an increasingly problematic area, yet there is not enough research, analysis or academic study in this field. For a radical solution, the order of importance of the problems must be known, because high-severity problems also carry high risk. In this study, for the first time, a Search Conference was held with the participation of 45 stakeholders to reveal the OHS problems of regular and irregular migrant workers in our country. The problems arising from this workshop were compared with those in the literature, and the problems in this field were determined and weighted for our country.
Later, to determine the significance levels of these problems, an AHP study (a multi-criteria decision-making method) was conducted with 15 participating experts, and the significance levels of the problems were determined. Evaluation of the data obtained showed that the OHS risks of migrant workers arise 58% from laws and government policies, 29% from employers, and 13% from personal faults of the employees. This is the first academic study in the field of the OHS problems of migrant workers, and it provides guidance on which problems should be prioritized.
Keywords: environmental conditions, migrant workers, OHS issues, workplace conditions
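The AHP step works from pairwise comparison matrices whose principal eigenvector gives the priority weights. The 3x3 judgment matrix below, over the three risk sources named above (laws and policies, employers, employee faults), is hypothetical, not the experts' actual data.

```python
import numpy as np

# AHP priority weights from a pairwise comparison matrix (Saaty 1-9 scale).
# Hypothetical judgments: laws/policies vs employers vs employee faults.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority vector (normalized eigenvector)

# Consistency check: CI = (lambda_max - n)/(n - 1), random index RI = 0.58 for n = 3;
# judgments are acceptably consistent when CR = CI/RI < 0.1.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print(np.round(w, 3), round(cr, 3))
```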
Procedia PDF Downloads 151
777 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain
Authors: Bastian Vollrath, Hartwig Hubel
Abstract:
In case of over-elastic and cyclic loading, strain may accumulate due to a ratcheting mechanism until the state of shakedown is possibly achieved. Load-history-dependent numerical investigations by a step-by-step analysis are rather costly in terms of engineering time and numerical effort. In the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods like the Simplified Theory of Plastic Zones (STPZ) have been developed to solve the problem with a few linear elastic analyses. Post-shakedown quantities such as strain ranges and cyclically accumulated strains are calculated approximately by disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses, in which the elastic material parameters are modified and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ has already turned out to work well for cyclic loading between two states of loading: usually, a few linear elastic analyses are sufficient to obtain a good approximation of the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable turns from a plane problem into a hyperspace problem, where time-consuming approximation methods need to be applied. Therefore, a solution restricted to structures with four stress components was developed to estimate the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown.
The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE implementation of the method in ANSYS and considering multilinear hardening, is given, which highlights the advantages of the method compared to incremental, step-by-step analysis.
Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ
Procedia PDF Downloads 161
776 Preparation and CO2 Permeation Properties of Carbonate-Ceramic Dual-Phase Membranes
Authors: H. Ishii, S. Araki, H. Yamamoto
Abstract:
In recent years, carbon dioxide (CO2) separation technology has been required for the reduction of greenhouse gas emissions and the efficient use of fossil fuels. Since CO2 accounts for the largest share of greenhouse gas emissions, it is considered to have the most influence on global warming. Therefore, CO2 separation technologies with high efficiency and low cost need to be established. In this study, we focused on membrane separation as an alternative to conventional separation techniques such as distillation or cryogenic separation, and prepared carbonate-ceramic dual-phase membranes to separate CO2 at high temperature. As porous ceramic substrates, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+σ, La0.6Sr0.4Ti0.3Fe0.7O3 and Ca0.8Sr0.2Ti0.7Fe0.3O3-α (PLNCG, LSTF and CSTF) were examined. PLNCG, LSTF and CSTF have the perovskite structure, which is highly stable and becomes ion-conducting when doped with another metal ion, giving these materials high oxygen ion diffusivity. PLNCG, LSTF and CSTF powders were prepared by a solid-phase process using the appropriate carbonates or oxides. To prepare porous substrates, these powders were mixed with carbon black (20 wt%) and a few drops of polyvinyl alcohol (5 wt%) aqueous solution. The powder mixture was packed into a stainless steel mold (13 mm) and uniaxially pressed into disk shape under a pressure of 20 MPa for 1 minute. The PLNCG, LSTF and CSTF disks were calcined in air for 6 h at 1473, 1573 and 1473 K, respectively. The carbonate mixture (Li2CO3/Na2CO3/K2CO3: 42.5/32.5/25 in mole percent ratio) was placed inside a crucible and heated to 793 K, and the porous substrates were infiltrated with the molten carbonate mixture at 793 K. The crystalline structures of the membranes, both fresh and after infiltration with the molten carbonate mixture, were determined by X-ray diffraction (XRD) measurement.
We confirmed that the crystal structures of PLNCG and CSTF changed slightly after infiltration with the molten carbonate mixture. CO2 permeation experiments with the PLNCG-carbonate, LSTF-carbonate and CSTF-carbonate membranes were carried out at 773-1173 K. A gas mixture of CO2 (20 mol%) and He was introduced at a flow rate of 50 ml/min to one side of the membrane, and the permeated CO2 was swept by N2 (50 ml/min). We confirmed the effect of the ceramic materials and of temperature on CO2 permeation at high temperature.
Keywords: membrane, perovskite structure, dual-phase, carbonate
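As a note on how CO2 permeance is typically quantified in a sweep-gas experiment of this kind, the sketch below back-calculates a permeance from the CO2 mole fraction detected in the sweep stream. The numbers (detected CO2 fraction, effective membrane area, feed partial pressure) are illustrative assumptions, not results from this work:

```python
# Hedged sketch: estimating CO2 permeance from a sweep-gas measurement.
def co2_permeance(x_co2_sweep, f_sweep, area, p_feed_co2, p_perm_co2):
    """Permeance in mol m^-2 s^-1 Pa^-1 from the CO2 mole fraction in the sweep."""
    # Molar flow of permeated CO2 carried by the N2 sweep.
    n_co2 = f_sweep * x_co2_sweep / (1.0 - x_co2_sweep)   # mol/s
    flux = n_co2 / area                                   # mol m^-2 s^-1
    # Driving force: CO2 partial-pressure difference across the membrane.
    return flux / (p_feed_co2 - p_perm_co2)

# 50 ml/min N2 sweep as an ideal gas at 0 degC, 1 atm (22.4 L/mol).
f_sweep = 50 / 1000 / 60 / 22.4                           # mol/s
# Assumed: 1% CO2 in sweep, 13 mm disk area, 20 mol% CO2 feed at 1 atm.
perm = co2_permeance(0.01, f_sweep, area=1.33e-4,
                     p_feed_co2=0.2 * 101325.0, p_perm_co2=0.0)
print(f"{perm:.2e}")
```

With these assumed values the result lands around 1e-7 mol m^-2 s^-1 Pa^-1, the order of magnitude commonly reported for molten-carbonate dual-phase membranes.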
Procedia PDF Downloads 367
775 Concept of Using an Indicator to Describe the Quality of Fit of Clothing to the Body Using a 3D Scanner and CAD System
Authors: Monika Balach, Iwona Frydrych, Agnieszka Cichocka
Abstract:
The objective of this research is to develop an algorithm, taking into account material type and body type, that will describe the fabric properties and the quality of fit of a garment to the body. One of the objectives of this research is to develop a new algorithm to simulate cloth draping within CAD/CAM software, since existing virtual fitting does not accurately simulate fabric draping behaviour. Part of the research into virtual fitting will focus on the mechanical properties of fabrics. Material behaviour depends on many factors, including fibre, yarn, manufacturing process, fabric weight, textile finish, etc. For this study, several fabric types with very different mechanical properties will be selected and evaluated for all of the above fabric characteristics. These fabrics include a thick woven cotton fabric, which is stiff and resists bending, and a woven fabric with elastic content, which is elastic and drapes on the body. Within the virtual simulation, the following mechanical properties can be specified: shear, bending, weight, thickness, and friction. To help calculate these properties, the KES (Kawabata Evaluation System) can be used; this system was originally developed to measure the mechanical properties of fabric. In this research, the author will focus on three properties: bending, shear, and roughness. This study will consider current research using the KES system to understand and simulate fabric folding on the virtual body. Testing will help to determine which material properties have the largest impact on the fit of the garment. By developing an algorithm which factors in body type, material type, and clothing function, it will be possible to determine how a specific type of clothing made from a particular type of material will fit a specific body shape and size. A fit indicator will display areas of stress on the garment, such as the shoulders, chest, waist, and hips. From this data, CAD/CAM software can be used to develop garments that fit with a very high degree of accuracy.
This research, therefore, aims to provide an innovative solution for garment fitting which will aid in the manufacture of clothing. It will help the clothing industry by cutting the cost of the clothing manufacturing process and also reducing the cost spent on fitting. The manufacturing process can be made more efficient by virtually fitting the garment before the real clothing sample is made. Fitting software could be integrated into clothing retailers' websites, allowing customers to enter their biometric data and determine how a particular garment and material type would fit their body.
Keywords: 3D scanning, fabric mechanical properties, quality of fit, virtual fitting
Procedia PDF Downloads 178
774 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a new and promising management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling accommodates conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequent reform of generic concepts to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
Procedia PDF Downloads 503
773 Mindmax: Building and Testing a Digital Wellbeing Application for Australian Football Players
Authors: Jo Mitchell, Daniel Johnson
Abstract:
MindMax is a digital community and learning platform built to maximise the wellbeing and resilience of AFL players and Australian men. The MindMax application engages men, via their existing connection with sport and video games, in a range of wellbeing ideas, stories and actions, because we believe fit minds kick goals. MindMax is an AFL Players' Association led project, supported by a Movember Foundation grant, to improve the mental health of Australian males aged between 16 and 35 years. The key engagement and delivery strategy for the project was digital technology, sport (AFL) and video games, underpinned by evidence-based wellbeing science. The project commenced in April 2015, and the expected completion date is March 2017. This paper describes the conceptual model underpinning product development, including progress, key learnings and challenges, as well as the research agenda. Evaluation of the MindMax project is a multi-pronged approach of qualitative and quantitative methods, including participatory design workshops, online reference groups, longitudinal survey methods, a naturalistic efficacy trial and evaluation of the social and economic return on investment. MindMax is focused on the wellness pathway and maximising our mind's capacity for fitness by sharing and promoting evidence-based actions that support this. A range of these ideas (from ACT, mindfulness and positive psychology) are already being implemented in AFL programs and services, mostly in face-to-face formats, with strong engagement by players. Players' experience features strongly as part of the product content. Wellbeing science is a discipline of psychology that explores what helps individuals and communities to flourish in life. Rather than ask questions about illness and poor functioning, wellbeing scientists and practitioners ask questions about wellness and optimal functioning.
While illness and wellness are related, they operate as separate constructs and as such can be influenced through different pathways. The essential idea was to take the evidence-based wellbeing science on building psychological fitness to the places and spaces that men already frequent, namely sport and video games. There are 800 current senior AFL players, 5,000+ past players, and 11 million boys and men who are interested in the lives of AFL players and in what they think and do to be their best, both on and off the field. AFL players are also keen video gamers, using games as one way to de-stress, connect and build wellbeing. There are 9.5 million active gamers in Australia, with 93% of households having a device for playing games. Video games in MindMax will be used as an engagement and learning tool. Gamers (including AFL players) can also share their personal experience of how games help build their mental fitness. Currently available games (i.e., we are not in the game creation business) will also be used to motivate and connect MindMax participants. The MindMax model is built with replication by other sports codes (e.g., cricket) in mind. It is intended to support not only our current crop of athletes but also the community that surrounds them, so they can maximise their capacity for health and wellbeing.
Keywords: Australian Football League, digital application, positive psychology, wellbeing
Procedia PDF Downloads 238
772 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials generally depends on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. For this purpose, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing.
For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE-model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios, compared to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
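The identification step of the first part, i.e., minimizing the difference between simulated and measured FRFs, can be sketched as follows. This is a minimal illustration that fits a generalized Maxwell complex modulus to a synthetic "measurement", with assumed relaxation times and arbitrary units; it is not the authors' ANSYS-based workflow:

```python
import numpy as np
from scipy.optimize import least_squares

def gmaxwell(params, omega, taus):
    """Generalized Maxwell complex modulus:
    E*(w) = E_inf + sum_i E_i * (1j*w*tau_i) / (1 + 1j*w*tau_i)."""
    e_inf, *e_i = params
    s = 1j * omega[:, None] * taus[None, :]
    return e_inf + (np.asarray(e_i) * s / (1 + s)).sum(axis=1)

omega = np.logspace(-1, 3, 50)     # rad/s, covering the test band
taus = np.array([1.0, 0.01])       # fixed relaxation times (assumed)

# Synthetic "measured" FRF standing in for the shaker data.
true = np.array([2.0, 5.0, 3.0])   # E_inf, E_1, E_2 (arbitrary units)
measured = gmaxwell(true, omega, taus)

def residual(params):
    # Stack real and imaginary misfit so both magnitude and phase are matched.
    diff = gmaxwell(params, omega, taus) - measured
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residual, x0=[1.0, 1.0, 1.0], bounds=(0, np.inf))
print(np.round(fit.x, 3))
```

With the relaxation times fixed, the model is linear in the remaining parameters, so the least-squares fit recovers them exactly; in the actual inverse approach each model evaluation would be an FE-computed FRF instead of a closed-form expression.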
Procedia PDF Downloads 205
771 Exploring Barriers to Social Innovation: Swedish Experiences from Nine Research Circles
Authors: Claes Gunnarsson, Karin Fröding, Nina Hasche
Abstract:
Innovation is a necessity for the evolution of societies, and it is also a driving force in human life that leverages value creation among cross-sector participants in various network arrangements. Social innovations can be characterized as the creation and implementation of a new solution to a social problem which is more effective and sustainable than existing solutions, in terms of improving society's conditions and, in particular, social inclusion processes. However, barriers exist which may restrict the potential of social innovations to live up to their promise as a driving force promoting societal welfare. The literature points at difficulties in tackling social problems, primarily related to problem complexity, access to networks, and lack of financial muscle. Further research is warranted to clarify these barriers in detail, including the interplay of institutional logics in the development of cross-sector collaborations in networks and the organizing processes needed to break through innovation barriers. There is also a need to further elaborate how obstacles that create a gap between the actual and desired state of innovative, value-creating service systems can be overcome. The purpose of this paper is to illustrate barriers to social innovations, based on qualitative content analysis of 36 dialogue-based seminars (i.e., research circles) with nine Swedish focus groups including more than 90 individuals representing civil society organizations, private business, municipal offices, and politicians, and to analyze patterns that reveal the constituents of barriers to social innovations. The paper draws on central aspects of innovation barriers as discussed in the literature and analyzes barriers basically related to internal/external and tangible/intangible characteristics.
The findings of this study are that existing institutional structures highly influence the transformative potential of social innovations, as do networking conditions in terms of building a competence-propelled strategy, which serves as a springboard for overcoming barriers of competence extension. Both the theoretical and the practical knowledge will contribute to how policy-makers and SI practitioners can facilitate and support social innovation processes to be contextually adapted and implemented across areas and sectors.
Keywords: barriers, research circles, social innovation, service systems
Procedia PDF Downloads 257
770 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning
Authors: Gabriel Michel
Abstract:
The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between the expectations and functioning of students and university teaching. At the same time, the business world needs employees who are obviously competent and proficient in technology, but who are also imaginative, flexible, able to communicate, to learn on their own and to work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, for example, but the model behind it is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students and encouraging them to cooperate. It is against this backdrop that we have conducted action research to explore the possibility of changing the way students learn (rather than the way we teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration and learning using all available resources (other students, course materials, those on the web and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows: use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and lots of collective case solving; engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g. "assimilating 90% of the course", and secondary objectives (like a to-do list) such as "create a new case study for Tuesday"; encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go); develop cooperation, with the tool encouraging group work, the search for common solutions and the exchange of the best solutions with other groups; etc. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluating the productions of others. A questionnaire and analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.
Keywords: pedagogy, cooperation, university, learning environment
Procedia PDF Downloads 22
769 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia
Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva
Abstract:
In modern conditions, the scientific examination of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behavior by all players in national and world markets. Developing effective mechanisms for demand analysis, which is crucial for defining the consumer characteristics of future production, and for managing the risks connected with the development of this production are the main objectives of control systems in modern conditions. The modern period of economic development is characterized by a high level of globalization of business and rigid competition. At the same time, a considerable share of the cost of new products and services has a non-material, intellectual nature. In contemporary Russia, the development of small innovative firms has been the most successful. Such firms, through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, can show flexibility and succeed in the market. As a rule, such enterprises should have a highly flexible structure, avoiding rigid schemes of subordination and demanding essentially new incentives for including personnel in innovative activity. Such structures, as well as the new approach to management, can be built on value-oriented management, which is directed at gradually changing the consciousness of personnel and forming groups of adherents involved in solving shared innovative tasks. At the same time, value changes can gradually capture not only the innovative firm's staff but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and the acceleration of change in the organization's value system.
This relates to the fact that new technologies change the internal environment of the organization in such a way that the old system of values becomes inefficient in the new conditions. The introduction of new technologies often demands changes in the structure of employee interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of the management of the organization's values changes as well. This is due to the need to attract more staff to justify and consolidate the new value system and to incorporate their views into its motivational potential.
Keywords: value, risk, creation, problems, organization
Procedia PDF Downloads 284
768 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty both in terms of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, the part-of-speech annotation, and the tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the solution best suited to the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers' identity, the communicative events, and anthroponyms. Prominence will be given to the annotation of question/answer exchanges, to investigate the interlocutors' choices and how such choices impact communication.
Indeed, the automated identification of questions, in relation to the expected answers, helps to understand how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. The results of our preliminary analysis will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, the tagging system, and the discursive-pragmatic annotation to be included via Oxygen.
Keywords: spoken corpus, diplomats' interviews, tagging system, discursive-pragmatic annotation, English linguistics
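One way such a question/answer exchange might be encoded in XML is sketched below; the element and attribute names here are hypothetical illustrations of the general approach, not the actual InterDiplo tag set:

```python
import xml.etree.ElementTree as ET

# Build one annotated question/answer exchange with speaker metadata.
# All tag and attribute names are invented for illustration.
exchange = ET.Element("exchange", {"id": "ex01", "type": "question-answer"})

q = ET.SubElement(exchange, "utterance",
                  {"who": "journalist", "function": "question"})
q.text = "What measures is the mission taking?"

a = ET.SubElement(exchange, "utterance",
                  {"who": "diplomat", "function": "answer"})
a.text = "We are coordinating with local authorities."

# Serialize to a string, as would be stored in the corpus files.
xml_string = ET.tostring(exchange, encoding="unicode")
print(xml_string)
```

Pairing each question with its answer at the mark-up level is what makes the elicitation patterns described above directly queryable, e.g., retrieving all diplomat answers to a given question type.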
Procedia PDF Downloads 185
767 Strategic Analysis of Loss of Urban Heritage in Bhopal City Due to Infrastructure Development
Authors: Aishwarya K. V., Shreya Sudesh
Abstract:
Built along the edges of an 11th-century CE man-made lake, the city of Bhopal has stood witness to historic layers dating back to Palaeolithic times: early and medieval kingdoms ranging from the Parmaras and Pratiharas to the tribal Gonds, then the Begum-Nawabs, before it finally became the capital of Madhya Pradesh after Independence. The lake, more popularly called the Upper Lake, was created by King Raja Bhoj of the Parmara dynasty in 1010 AD, when he constructed a bund wall across the Kolans river. Atop this bund wall lies the Kamlapati Mahal, which was part of the royal enclosure built in 1702 belonging to the Gond kingdom. The Mahal is the epicentre of development in the city because it lies at the centre of the axis joining the old core and the new city. Rapid urbanisation descended upon the city once it became the administrative capital of Madhya Pradesh, a newly formed state of an independent India. Industrial pockets were set up, and refugees from the India-Pakistan Partition settled in various regions of the city. To cater to this sudden growth, there was a boom in infrastructure development in the late twentieth century, which included precarious decisions on the handling of heritage sites, causing the destruction of significant parts of the historic fabric. This practice continues to this day, as buffer/protected zones are breached through exemptions and the absence of robust regulations allows further deterioration of urban heritage. The aim of the research is to systematically study in detail the city's urban infrastructure development and its adverse effect on the existing heritage fabric. Through the paper, an attempt is made to study the parameters involved in preparing the Masterplan of the city and other development projects. The research would follow a values-led approach to studying the heritage fabric, in which the significance of the place is assessed based on the values attributed by stakeholders.
This approach will involve the collection and analysis of site data, assessment of the significance of the site, and a listing of its potential. The study would also attempt to arrive at a solution that reconciles urban development with the protection of the heritage fabric.
Keywords: heritage management, infrastructure development, urban conservation, urban heritage
Procedia PDF Downloads 225
766 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. Whilst the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is carried out by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. The relationship between cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gas (RFG) and an overall decrease in the power required for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: to achieve a lower ∆Tmin, a larger heat exchanger is required, which means a higher capital cost but better heat recovery and a lower operational cost.
∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance and operating conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
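The capital/operational cost trade-off over ∆Tmin can be sketched numerically. The cost models below are illustrative assumptions, not values from the study: capital cost is taken as inversely proportional to ∆Tmin (larger exchanger area at a smaller approach), operating cost as proportional to ∆Tmin (poorer heat recovery at a larger approach).

```python
# Hypothetical cost models: the coefficients k_cap and k_op are illustrative,
# not taken from the NetPower analysis.
def capital_cost(dt_min, k_cap=500.0):
    # Exchanger area (hence capital cost) grows as the approach temperature
    # shrinks: cost ~ k_cap / dT_min.
    return k_cap / dt_min

def operating_cost(dt_min, k_op=2.0):
    # Unrecovered heat (hence fuel/compressor cost) grows with dT_min.
    return k_op * dt_min

def total_cost(dt_min):
    return capital_cost(dt_min) + operating_cost(dt_min)

# Scan candidate approach temperatures from 5 K to 50 K in 0.5 K steps and
# pick the minimum of the combined cost curve.
candidates = [5.0 + 0.5 * i for i in range(91)]
best_dt_min = min(candidates, key=total_cost)
```

With these coefficients the combined curve bottoms out near sqrt(k_cap/k_op) ≈ 15.8 K, so the scan selects the nearest candidate; in practice the two cost curves would come from vendor quotations and fuel prices.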
Procedia PDF Downloads 204

765 Status Quo Bias: A Paradigm Shift in Policy Making
Authors: Divyansh Goel, Varun Jain
Abstract:
Classical economics works on the principle that people are rational and analytical in their decision making and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game-theory model. This model has failed on many occasions to estimate the behavior and dealings of rational people, giving proof of other underlying heuristics and cognitive biases at work. This paper probes these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem that many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the number of people actually signing up for it. This paper is an attempt to shape public policy so as to increase the number of organ donations and close the gap between the people who believe in signing up for organ donation and those who actually do. The key assumption is that in cases of cognitive dissonance, where people hold conflicting views, they tend to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research involves an analysis of mandated-choice models of organ donation through two case studies: the opt-in system of Germany (where people have to explicitly sign up for organ donation) and the opt-out system of Austria (where every citizen is an organ donor from birth and has to explicitly register a refusal). A detailed analysis of the experiment performed by Eric J. Johnson and Daniel G. Goldstein is also presented; their research, like independent experiments such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results.
The conclusion is that the general population has, by and large, no rigid stand on organ donation and is susceptible to the status quo bias, which in turn can determine whether a large majority of people consent to organ donation or not. Thus, the paper throws light on how governments can use the status quo bias to drive positive social change by making policies in which everyone is by default marked an organ donor, which will in turn save the lives of people who die on organ-transplant waiting lists and save the economy countless hours of lost productivity.
Keywords: behavioral economics, game theory, organ donation, status quo bias
Procedia PDF Downloads 300

764 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared with the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution, and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature, and other variables. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped quantitative industrial factories. This study completes a causal inference engine and gives an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random-walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission sensors (Total Suspended Particulates, TSP) and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regions and industrial characteristics, according to long-term data observation and calibration.
Together, these qualitative and quantitative time series yield a causal inference engine in the cloud that is practicable for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operation and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT
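The particle-tracking and random-walk step named above for advection and diffusion can be sketched as follows; the wind components, particle count, and diffusion coefficient are illustrative assumptions, not values from the study.

```python
import math
import random

def step(particles, u, v, dt=1.0, diffusion=0.5):
    """Advect each particle by the wind (u, v) and add a Gaussian random walk
    whose variance 2*D*dt models turbulent diffusion."""
    sigma = math.sqrt(2.0 * diffusion * dt)
    return [(x + u * dt + random.gauss(0.0, sigma),
             y + v * dt + random.gauss(0.0, sigma))
            for x, y in particles]

random.seed(0)
plume = [(0.0, 0.0)] * 1000          # particles released at a single stack
for _ in range(12):                  # 12 hourly steps, matching the 12 h horizon
    plume = step(plume, u=2.0, v=0.5)

mean_x = sum(p[0] for p in plume) / len(plume)  # plume centroid drifts downwind
```

Binning the final particle positions onto the 1 km grid would give the hourly concentration field that the engine writes out in netCDF4.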
Procedia PDF Downloads 87

763 Grain Structure Evolution during Friction-Stir Welding of 6061-T6 Aluminum Alloy
Authors: Aleksandr Kalinenko, Igor Vysotskiy, Sergey Malopheyev, Sergey Mironov, Rustam Kaibyshev
Abstract:
From a thermo-mechanical standpoint, friction-stir welding (FSW) represents a unique combination of very large strains, high temperature, and a relatively high strain rate. Material behavior under such extreme deformation conditions is not well studied, and thus microstructural examination of friction-stir-welded materials is of essential academic interest. Moreover, a clear understanding of the microstructural mechanisms operating during FSW should improve our understanding of the microstructure-properties relationship in FSWed materials and thus enable us to optimize their service characteristics. Despite extensive research in this field, the microstructural behavior of some important structural materials remains incompletely understood. To contribute to this important work, the present study examines grain structure evolution during FSW of 6061-T6 aluminum alloy. To provide an in-depth insight into this process, the electron backscatter diffraction (EBSD) technique was employed. Microstructural observations were conducted using an FEI Quanta 450 Nova field-emission-gun scanning electron microscope equipped with TSL OIM software. A suitable surface finish for EBSD was obtained by electro-polishing in a solution of 25% nitric acid in methanol. A 15° criterion was employed to differentiate low-angle boundaries (LABs) from high-angle boundaries (HABs). In the entire range of the studied FSW regimes, the grain structure evolved in the stir zone was found to be dominated by nearly equiaxed grains with a relatively high fraction of low-angle boundaries and a moderate-strength B/-B {112}<110> simple-shear texture. In all cases, grain-structure development was dictated by the extensive formation of deformation-induced boundaries and their gradual transformation into high-angle grain boundaries. Accordingly, grain subdivision was concluded to be the key microstructural mechanism.
Remarkably, a gradual suppression of this mechanism was observed at relatively high welding temperatures. This surprising result is attributed to the reduction of dislocation density due to annihilation phenomena.
Keywords: electron backscatter diffraction, friction-stir welding, heat-treatable aluminum alloys, microstructure
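The 15° LAB/HAB criterion used in the EBSD analysis amounts to thresholding boundary misorientation angles. A minimal sketch, with an illustrative (invented) set of misorientations:

```python
def hab_fraction(misorientations_deg, threshold_deg=15.0):
    """Fraction of boundaries whose misorientation meets or exceeds the
    high-angle threshold (15 degrees, as in the study's criterion)."""
    high = sum(1 for m in misorientations_deg if m >= threshold_deg)
    return high / len(misorientations_deg)

# Illustrative misorientation angles (degrees) for eight boundaries; real data
# would come from the EBSD orientation map, one angle per boundary segment.
sample = [2.5, 8.0, 14.9, 15.0, 30.0, 45.0, 58.0, 5.1]
frac = hab_fraction(sample)  # 4 of the 8 boundaries are high-angle
```

Tracking this fraction across welding regimes is one way to quantify how far the LAB-to-HAB transformation (grain subdivision) has progressed.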
Procedia PDF Downloads 236

762 'The Cultural Sanctuary of Black Kafirs' Cultural and Tourism Promotion of Kalash Culture
Authors: Jamal Ahmad
Abstract:
The Sanctuary of the Kafirs is a sanctified place for the people of Kalash which contains the sacred remains of their culture. The existence of the cultural sanctuary is not limited to the boundaries of culture; its canopy also contains spiritual attachments in terms of religion, rituals, introspections, myths, customs, and living standards. Culture is the manifestation of human intellectual achievement in the qualitative phenomenon of a place. The ethnic people of the Hindu Kush (the Kalash) are an indigenous group that practices animism. They believe in animistic symbology, i.e., that the material universe has high spiritual power. The animism in their way of life is expressed in the sacred sacrifices of animals (goats, sheep, etc.) at their festivals, which are symbols of purity. Similarly, certain cultural and religious phenomena make their behavior, living pattern, fairy tales, births, and even deaths unique. The scattered and vanishing fragments of Kafiristan demand a phenomenal solution that molds all these factors into preserving standards. They demand a place of belief where their unique culture, religion, festivals, and lifestyle form a sincere base for future existence; such a phenomenon of place will, consciously or unconsciously, mold these ideas into the building fabric. The sanctuary contains an ancient vandalized cemetery, the qaliq, the mujnatikeen, the jastaks, the dewadoor, an amphitheater for dancing and ritual performances, an herbal garden, and a profile sanctuary of the bloodline of the Kalash. The case analysis provokes a new architecture of place, a phenomenological architecture, which requires a place and a phenomenon to take place. Animistic symbology and phenomenology are both part of Kalash life but need their hidden meanings and existence revealed, i.e., the Balamain, the alpine meadows, the sacred river.
The architectural work is strengthened by the philosophies of animism and phenomenology, which make it easier to understand. The scope of the work is to reincarnate the ethical boundaries between the neighboring tribes and the Kafirs through a series of dwellings, cultural and religious communal buildings and spaces, gardens, and street layouts under the umbrella of the ethical beliefs of the Kalash community. We therefore conclude by proposing to build the Sanctuary of the Kafirs in the Bamboret valley of Kalash.
Keywords: Qaliq, Mujnatikeen, Dewadoor, Jastaks
Procedia PDF Downloads 334

761 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks such as approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-in-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative-state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial-condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner.
The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
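The dual-quaternion kinematic description that couples translation and rotation can be illustrated with a minimal pose-composition sketch. This shows the representation only, not the paper's MPC law, and the numeric poses are illustrative.

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_from_pose(q, t):
    """Unit dual quaternion (real, dual) for rotation q and translation t:
    dual part = 0.5 * (0, t) * q."""
    dual = tuple(0.5 * c for c in qmul((0.0, *t), q))
    return q, dual

def dq_mul(dq1, dq2):
    """Compose two rigid transforms: (r1*r2, r1*d2 + d1*r2)."""
    r1, d1 = dq1
    r2, d2 = dq2
    real = qmul(r1, r2)
    dual = tuple(x + y for x, y in zip(qmul(r1, d2), qmul(d1, r2)))
    return real, dual

def dq_translation(dq):
    """Recover the translation: t = 2 * dual * conj(real)."""
    r, d = dq
    rc = (r[0], -r[1], -r[2], -r[3])
    return tuple(2.0 * c for c in qmul(d, rc)[1:])

# Two illustrative poses with identity rotations: translations simply add.
a = dq_from_pose((1.0, 0.0, 0.0, 0.0), (1.0, 2.0, 3.0))
b = dq_from_pose((1.0, 0.0, 0.0, 0.0), (0.5, 0.0, -1.0))
t = dq_translation(dq_mul(a, b))
```

The appeal noted in the abstract is that one algebraic object carries both attitude and position, so a single kinematic constraint in the MPC formulation covers the coupled motion.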
Procedia PDF Downloads 146

760 Economic and Environmental Assessment of Heat Recovery in Beer and Spirit Production
Authors: Isabel Schestak, Jan Spriet, David Styles, Prysor Williams
Abstract:
Breweries and distilleries are well known for their high water usage. The water consumption of a UK brewery reportedly ranges from 3-9 L per litre of beer, and that of a distillery from 7-45 L per litre of spirit. This includes product water, such as mashing water, but also water for wort and distillate cooling and for cleaning tanks, casks, and kegs. When cooling towers are used, cooling water can dominate the water consumption of a brewery or distillery. Interlinked with the high water use is a substantial heating requirement for mashing, wort boiling, and distillation, typically met by the combustion of fossil fuels such as gasoil. Many water and wastewater streams leave the processes hot, such as the returning cooling water or the pot ales. Therefore, several options exist to optimise the water and energy efficiency of beer and spirit production through heat recovery. Although these options are known in the sector, they are often not applied in practice because of the planning effort involved or financial obstacles. In this study, different possibilities and design options for heat recovery systems are explored in four breweries/distilleries in the UK and assessed from an economic as well as an environmental point of view. The eco-efficiency methodology according to ISO 14045 is applied to combine both assessment criteria and determine the optimum solution for heat recovery application in practice. The economic evaluation is based on the total value added (TVA), while the Life Cycle Assessment (LCA) methodology is applied to account for the environmental impacts of the installations required for heat recovery. The four case-study businesses differ in a) production scale, with mashing volumes ranging from 2,500 to 40,000 L; b) the heating and cooling technologies used; and c) the extent to which heat recovery is or is not applied. This enables the evaluation of different heat recovery cases based on empirical data.
The analysis provides guidelines for practitioners in the brewing and distilling sector in and outside the UK for realising heat recovery measures. Financial and environmental payback times are showcased for heat recovery systems in the four distilleries, which operate at different production scales. The results are expected to encourage the application of heat recovery where it is environmentally and economically beneficial and, ultimately, to contribute to reducing the water and energy footprint of brewing and distilling businesses.
Keywords: brewery, distillery, eco-efficiency, heat recovery from process and waste water, life cycle assessment
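Financial and environmental payback times of a heat-recovery installation reduce to simple ratios. All figures below (capital cost, recovered energy, fuel price, emission factors) are illustrative assumptions, not results from the study.

```python
def financial_payback_years(capex, annual_savings_kwh, fuel_price_per_kwh):
    """Years until the value of recovered energy repays the capital cost."""
    return capex / (annual_savings_kwh * fuel_price_per_kwh)

def environmental_payback_years(embodied_co2_kg, annual_savings_kwh,
                                fuel_co2_kg_per_kwh):
    """Years until avoided fuel emissions offset the embodied emissions of
    the installation (exchanger, pipework, buffer tanks)."""
    return embodied_co2_kg / (annual_savings_kwh * fuel_co2_kg_per_kwh)

# Hypothetical mid-size site: 20,000 GBP capex, 150 MWh/yr of heat recovered.
fin = financial_payback_years(capex=20000.0,
                              annual_savings_kwh=150000.0,
                              fuel_price_per_kwh=0.06)
env = environmental_payback_years(embodied_co2_kg=5000.0,
                                  annual_savings_kwh=150000.0,
                                  fuel_co2_kg_per_kwh=0.25)
```

Comparing the two payback times per site is one simple way to see the eco-efficiency pattern the study reports: environmental payback is typically much shorter than financial payback, since avoided fuel emissions accumulate quickly relative to the embodied emissions of the hardware.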
Procedia PDF Downloads 118

759 Development of an Integrated Methodology for Fouling Control in Membrane Bioreactors
Authors: Petros Gkotsis, Anastasios Zouboulis, Manasis Mitrakas, Dimitrios Zamboulis, E. Peleka
Abstract:
The most serious drawback of wastewater treatment using membrane bioreactors (MBRs) is membrane fouling, which gradually decreases membrane permeability and deteriorates efficiency. This work is part of a research project that aims to develop an integrated methodology for membrane fouling control, using specific chemicals that enhance the coagulation and flocculation of the compounds responsible for fouling, hence reducing biofilm formation on the membrane surface and limiting the fouling rate, acting as a pre-treatment step. For this purpose, a pilot-scale plant with fully automatic operation, achieved by means of a programmable logic controller (PLC), has been constructed and tested. The experimental set-up consists of four units: a wastewater feed unit, a bioreactor, a membrane (side-stream) filtration unit, and a permeate collection unit. Synthetic wastewater was fed as the substrate for the activated sludge. The dissolved oxygen (DO) concentration of the aerobic tank was maintained in the range of 2-3 mg/L throughout operation by using an aerator below the membrane module. The membranes were operated at a flux of 18 LMH, with membrane relaxation steps of 1 min performed every 10 min. Both commercial and composite coagulants are added in different concentrations in the pilot-scale plant, and their effect on the overall performance of the MBR system is presented. Membrane fouling was assessed in terms of TMP, membrane permeability, sludge filterability tests, total resistance, and the unified modified fouling index (UMFI). Preliminary tests showed that particular attention should be paid to the addition of the coagulant solution, indicating that pipe flocculation effectively increases hydraulic retention time and leads to voluminous sludge flocs.
Membrane fouling results in increased treatment cost, due to high energy consumption and the need for frequent membrane cleaning and replacement. Given the widespread application of MBR technology over the past few years, it is clear that developing a methodology to mitigate membrane fouling is of paramount importance. The present work aims to develop an integrated technique for membrane fouling control in MBR systems and thus contribute to sustainable wastewater treatment.
Keywords: coagulation, membrane bioreactor, membrane fouling, pilot plant
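The total-resistance metric used in the fouling assessment follows from Darcy's law, R_t = TMP / (μ·J). The 18 LMH flux matches the operating condition given above; the TMP, viscosity, and clean-membrane resistance values are illustrative assumptions.

```python
def total_resistance(tmp_pa, flux_lmh, viscosity_pa_s=1.0e-3):
    """Total filtration resistance (1/m) from Darcy's law, R_t = TMP/(mu*J)."""
    flux_m_per_s = flux_lmh / 3.6e6  # 1 LMH = 1e-3 m^3/m^2 per 3600 s
    return tmp_pa / (viscosity_pa_s * flux_m_per_s)

def fouling_resistance(r_total, r_membrane):
    """Resistance-in-series: the extra resistance fouling adds on top of the
    intrinsic (clean) membrane resistance."""
    return r_total - r_membrane

# Hypothetical operating point: 0.2 bar TMP at the 18 LMH flux from the study,
# with an assumed clean-membrane resistance of 1e12 1/m.
r_t = total_resistance(tmp_pa=20000.0, flux_lmh=18.0)
r_f = fouling_resistance(r_t, r_membrane=1.0e12)
```

Logging r_f over time at constant flux is how a rising TMP translates into the fouling-rate comparisons between coagulant doses.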
Procedia PDF Downloads 309