Search results for: panel unit root
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4159

1129 Compaction of Municipal Solid Waste

Authors: Jovana Jankovic Pantic, Dragoslav Rakic, Tina Djuric, Irena Basaric Ikodinovic, Snezana Bogdanovic

Abstract:

Despite the numerous activities undertaken to reduce municipal solid waste, its annual volumes continue to grow. In Serbia, the most common and indeed the only form of waste disposal is at municipal landfills with daily compaction and soil covering. Compaction of municipal waste is one of the basic components of the disposal process: well-compacted waste occupies less volume and allows much safer storage. In order to better predict the behavior of municipal waste at landfills, it is necessary to define the compaction parameters: the maximum dry unit weight and the optimum moisture content. In current geotechnical practice, the most common way to determine compaction parameters is the standard method (Proctor compaction test) used in soil mechanics, possibly with a reduction of compaction energy. Although this methodology is accepted in the newer geotechnical discipline of "waste mechanics", the different treatments of municipal waste at the landfill itself (including pretreatment) indicate the need to change this classical approach, the main reason being the need to simulate the operation of landfill compactors (hedgehogs). Therefore, various innovative solutions were introduced during the research, such as modifying the classic flat Proctor hammer by adding spikes whose function, in addition to compaction, is the destruction and shredding of municipal waste. The paper presents the behavior of four synthetic municipal waste samples with different compositions (Plandište landfill). The samples were tested in a standard Proctor apparatus at the same compaction energy but with two different hammers: a standard flat hammer and a hammer with spikes.
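As an illustrative sketch (not from the paper), the two compaction parameters named above can be derived from Proctor test points via the standard relation between dry and wet unit weight; the data below are hypothetical:

```python
def dry_unit_weight(wet_unit_weight, moisture_content):
    """Dry unit weight from wet (bulk) unit weight and gravimetric moisture content (decimal)."""
    return wet_unit_weight / (1.0 + moisture_content)

# Hypothetical Proctor points: (moisture content, wet unit weight in kN/m^3)
points = [(0.20, 7.8), (0.30, 8.8), (0.40, 9.8), (0.50, 9.6)]
dry = [(w, dry_unit_weight(g, w)) for w, g in points]

# The optimum moisture content is the one giving the maximum dry unit weight
w_opt, gamma_d_max = max(dry, key=lambda p: p[1])
```

In practice a smooth curve would be fitted through the dry points before picking the peak; taking the maximum test point is the simplest approximation.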

Keywords: compaction, hammer with spikes, landfill, municipal solid waste, proctor compaction test

Procedia PDF Downloads 217
1128 Effect of Barium Doping on Structural, Morphological, Optical, and Photocatalytic Properties of Sprayed ZnO Thin Films

Authors: Halima Djaaboube, Redha Aouati, Ibtissem Loucif, Yassine Bouachiba, Mouad Chettab, Adel Taabouche, Sihem Abed, Salima Ouendadji, Abderrahmane Bouabellou

Abstract:

Thin films of pure and barium-doped zinc oxide (ZnO) were prepared by the spray pyrolysis process. The films were deposited on glass substrates at 450°C, and the samples were characterized by X-ray diffraction (XRD) and UV-Vis spectroscopy. The X-ray diffraction patterns reveal the formation of a single ZnO Wurtzite structure and the good crystallinity of the films. The substitution of Ba ions influences the texture of the layers and makes the (002) plane the preferential growth plane. At concentrations below 6% Ba, the hexagonal structure of ZnO undergoes compressive stresses due to the barium ions, whose radius is about twice that of the Zn ions. This leads to a decrease of the a and c parameters and therefore of the unit-cell volume, a result confirmed by the decrease in the number of crystallites and the increase in their size. At concentrations above 6%, barium substitutes for the zinc atoms and modifies the structural parameters of the thin layers. The band gap of the ZnO films decreased with increasing doping; this decrease is probably due to the 4d orbitals of the Ba atom, through sp-d exchange interactions between the band electrons and the localized d-electrons of the substituted Ba ions. The Urbach energy, however, increases, which implies the creation of energy levels below the conduction band and narrows the band gap. The photocatalytic activity of 9% Ba-doped ZnO was evaluated by the photodegradation of methylene blue under UV irradiation.
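The link claimed above between shrinking a and c parameters and a smaller unit cell follows from the hexagonal cell-volume formula; a minimal sketch (the doped lattice parameters below are hypothetical, the undoped ones are approximate literature values for ZnO):

```python
import math

def hex_cell_volume(a, c):
    """Unit-cell volume of a hexagonal (Wurtzite) lattice: V = (sqrt(3)/2) * a^2 * c."""
    return (math.sqrt(3) / 2.0) * a * a * c

# Approximate undoped ZnO lattice parameters, in angstroms
v_pure = hex_cell_volume(3.249, 5.206)

# Hypothetical compressively strained (Ba-doped) parameters: both a and c reduced
v_doped = hex_cell_volume(3.240, 5.190)
```

Any simultaneous decrease of a and c necessarily decreases V, which is the geometric content of the abstract's statement.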

Keywords: barium, doping, photodegradation, spray pyrolysis, ZnO

Procedia PDF Downloads 115
1127 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit

Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana

Abstract:

The Internet of Things (IoT) and edge computing have become among the most discussed innovations for their potential to improve and disrupt traditional business and industry alike. New challenges such as the COVID-19 pandemic have endangered both the workforce and established business processes. Alongside the drastically changed business landscape left in the aftermath of the global pandemic loom the threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing and purpose-designed visual processing units present great opportunities for business in this environment. The literature reviewed examines how the Internet of Things and this disruptive wave affect current business, how businesses need to adapt to changes in the market and the world, and how benchmark testing of consumer-marketed IoT devices equipped with edge computing hardware can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies driving these developments and why they will change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.
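A minimal sketch of the kind of latency benchmark the abstract describes; the timing harness is generic, and the stand-in classifier below is a placeholder (a real study would call an edge-accelerated image-classification model):

```python
import statistics
import time

def benchmark(infer, inputs, warmup=3, runs=10):
    """Time a classification callable; returns (mean, p95) per-image latency in ms."""
    for x in inputs[:warmup]:          # warm-up passes, excluded from timing
        infer(x)
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        for x in inputs:
            infer(x)
        latencies.append((time.perf_counter() - t0) * 1000 / len(inputs))
    latencies.sort()
    return statistics.mean(latencies), latencies[int(0.95 * (len(latencies) - 1))]

# Stand-in "model": replace with a real edge-accelerated classifier in practice
dummy_infer = lambda x: sum(x) % 10
mean_ms, p95_ms = benchmark(dummy_infer, [[1, 2, 3]] * 8)
```

Reporting both a mean and a high percentile matters for edge devices, where occasional thermal throttling or scheduling jitter can dominate worst-case behavior.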

Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification

Procedia PDF Downloads 151
1126 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage: building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed that identify critical success factors for conducting KM across numerous industries, from science to business, and across organization scales from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges that cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of barriers against the four KM pillars of organization, technology, leadership, and learning. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), the types of knowledge involved, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM has evolved by leveraging information technology (IT) and web-based tools from Web 1.0 to Enterprise 2.0, KM may also leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers facing technical teams. The research tests hypotheses that statistically evaluate whether the KM barriers of HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, generalizations of success factors and approaches may also be derived for HDE teams.
Expert elicitation will be conducted using an internet-hosted questionnaire delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The responses will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
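The ANOVA step described above can be sketched as a one-way F statistic computed over questionnaire ratings; the three rating groups below are hypothetical Likert-scale responses, not data from the study:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean squares."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical barrier ratings from three respondent groups (e.g. managers,
# lead engineers, KM experts) on a 1-5 scale
f_stat = anova_f([[4, 5, 4, 5], [2, 3, 2, 3], [3, 3, 4, 4]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom would indicate that at least one group rates a barrier significantly differently, which is the ranking signal the study proposes to use.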

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 273
1125 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System

Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng

Abstract:

Tomosynthesis is an imaging technique that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, at a lower radiation dose than computed tomography (CT). Because of the limited angular range, many image properties depend on the scanning direction; a non-coplanar imaging technique was therefore developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate it using reconstructed images. The INER prototype tomosynthesis system consists of an X-ray tube, a flat-panel detector, and a motion machine that can move the X-ray tube in multiple directions during acquisition. We investigated three imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom containing lesions of three sizes (3, 5, and 8 mm in diameter) was used to evaluate image quality. Traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction; non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed with the maximum-likelihood expectation-maximization (ML-EM) iterative reconstruction algorithm. Our qualitative evaluation assessed artifacts in the reconstructed images; quantitatively, we calculated the peak-to-valley ratio (PVR), the intensity ratio of a lesion to the background, to evaluate lesion contrast. Qualitatively, in the non-coplanar reconstruction the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were found.
In the 2D X-ray single projection, anatomic structures overlapped and the lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. Quantitatively, the PVRs showed no significant differences between non-coplanar and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced without decreasing the lesion PVRs, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
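The PVR metric used above reduces to a ratio of mean intensities over two regions of interest; a minimal sketch with hypothetical voxel values (the study's actual ROI definition may differ):

```python
def peak_to_valley_ratio(lesion_roi, background_roi):
    """PVR: mean intensity inside the lesion ROI over mean background intensity."""
    lesion_mean = sum(lesion_roi) / len(lesion_roi)
    background_mean = sum(background_roi) / len(background_roi)
    return lesion_mean / background_mean

# Hypothetical voxel intensities sampled from a reconstructed slice
pvr = peak_to_valley_ratio([180, 200, 190, 210], [95, 100, 105, 100])
```

A PVR near 1 means the lesion is indistinguishable from background; comparing PVRs between reconstruction techniques for the same phantom lesion isolates the effect of the acquisition geometry.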

Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging

Procedia PDF Downloads 364
1124 InP Nanocrystals Core and Surface Electronic Structure from Ab Initio Calculations

Authors: Hamad R. Jappor, Zeyad Adnan Saleh, Mudar A. Abdulsattar

Abstract:

The ab initio restricted Hartree-Fock method is used to simulate the electronic structure of indium phosphide (InP) nanocrystals (NCs) of 216-738 atoms, with sizes up to about 2.5 nm in diameter. The calculations are divided into two parts, surface and core. The oxygenated (001)-(1×1) facet, which expands as the nanocrystals grow larger, is investigated to determine the role of the surface in the nanocrystal electronic structure. Results show that the lattice constant and ionicity of the core part decrease as the nanocrystals grow in size: the smallest investigated nanocrystal is 1.6% larger in lattice constant and 131.05% larger in ionicity than the converged values of the largest investigated nanocrystal. Increasing nanocrystal size also increases the core cohesive energy (in absolute value), the core energy gap, and the core valence bandwidth. The surface states are found to be mostly non-degenerate because of the surface discontinuity and the oxygen atoms. The valence band is wider at the surface due to level splitting and the oxygen atoms. The method also shows fluctuations in the converged energy gap, valence bandwidth, and cohesive energy of the core part due to shape variation. The present work suggests adding ionicity and the lattice constant to the quantities affected by the quantum confinement phenomenon. The present model yields threefold results: it can be used to approach the electronic structure of the bulk crystal, the surface, and the nanocrystals.

Keywords: InP, nanocrystals core, ionicity, Hartree-Fock method, large unit cell

Procedia PDF Downloads 394
1123 Empirical Studies of Indigenous Career Choice in Taiwan

Authors: Zichun Chu

Abstract:

The issue of tribal poverty has always attracted attention. Due to social and economic difficulties, indigenous people's personal development and tribal development have been greatly restricted. Past studies have pointed out that poverty may stem from a lack of education, and the United Nations Sustainable Development Goals (SDGs) likewise state that widely available education is an important key to solving the poverty problem. According to the theory of intellectual capital adaptation, "being capable" and "willing to do" are the keys to development. Therefore, the "ability" and "will" of tribal residents regarding their tribal development are the core concern of tribal development. This research was designed to investigate the career-choice development model of indigenous tribal people by examining the current status of the human capital, social capital, and cultural capital of tribal residents. The study collected 327 questionnaires (70% of total households) from the Truku tribe to answer the research question: did education help residents make job-choice decisions, from the aspects of human, social, and cultural capital in the tribal context? The project adopted a 'single tribal research approach' to gain an in-depth understanding of the human capital formed under the unique culture of the tribe (Truku tribe). The results show that the education level of most research participants was high school, and very few high school graduates chose to continue to college; due to their parents' limited education, their social capital offered little support in job choice, and most of them work in labor and service industries. However, their cultural capital was comparatively rich for work: the sharing culture of Taiwanese indigenous people made their work status stable. The results suggest placing more emphasis on developing vocational education based on the tribe's location and resources.
The self-advocacy of indigenous people should also be developed so that they gain more power in making career decisions. This research is part of a pilot project called "INDIGENOUS PEOPLES, POVERTY, AND DEVELOPMENT," sponsored by the National Science and Technology Council of Taiwan. If this paper is accepted for presentation at the 2023 ICIP, a panel could be formed with the co-researchers (Chuanju Cheng, Chih-Yuan Weng, and YiXuan Chen) so that the audience can get a full picture of the pilot project.

Keywords: career choices, career model, indigenous career development, indigenous education, tribe

Procedia PDF Downloads 77
1122 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today's communication, and improving the speed of encryption and decryption is a long-standing research problem. In this paper, we propose an elliptic curve cryptographic processor architecture based on the SM2 prime field. For the hardware implementation, we optimized the algorithms at different stages of the architecture. For finite-field modular multiplication, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used for the conversion between the affine coordinate system and the Jacobian projective coordinate system. For the parallel scheduling of point operations on the elliptic curve, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we added mutual pre-operations to the point addition and point doubling to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms; each 256-bit scalar multiplication takes 0.275 ms, 32 times faster than a CPU implementation (dual-core ARM Cortex-A9).
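For orientation, the scalar multiplication at the heart of such a processor can be sketched in software as double-and-add over affine coordinates; this is a toy illustration on a small curve, not the paper's Jacobian-coordinate hardware datapath (which avoids the per-step modular inversions used here):

```python
def ec_add(P, Q, a, p):
    """Affine point addition on y^2 = x^3 + a*x + b over GF(p); None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                     # P + (-P) = infinity
    if P == Q:                                          # doubling: tangent slope
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1], -1, p) % p
    else:                                               # addition: chord slope
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add; hardware designs replace this with projective-coordinate ladders."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)          # point doubling every bit
        if bit == '1':
            R = ec_add(R, P, a, p)      # point addition on set bits
    return R

# Toy curve y^2 = x^3 + 2x + 3 over GF(97), base point (3, 6)
Q = scalar_mult(3, (3, 6), 2, 97)
```

The hardware speedups described in the abstract (Karatsuba-Ofman multipliers, fast reduction, parallel add/double) all target the field operations inside these two routines.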

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 92
1121 Dynamic Modeling of the Impact of Chlorine on Aquatic Species in Urban Lake Ecosystem

Authors: Zhiqiang Yan, Chen Fan, Yafei Wang, Beicheng Xia

Abstract:

Urban lakes play an invaluable role in urban water systems, providing flood control, water supply, and public recreation. However, over 38% of urban lakes in China have suffered from severe eutrophication. Chlorine, which can markedly inhibit the growth of phytoplankton in eutrophic waters, has been widely used in agriculture, aquaculture, and industry in the recent past. However, little has been reported on the effects of chlorine on lake ecosystems, especially on the main aquatic species. To investigate the ecological response of the main aquatic species and the system stability to chlorine interference in shallow urban lakes, a mini system dynamics model was developed based on the competition and predation among the main aquatic species and the total phosphorus cycle. The main species of submerged macrophyte, phytoplankton, zooplankton, benthos, and Spirogyra, together with total phosphorus in water and sediment, were used as state variables, while the interference of chlorine with phytoplankton was represented by an exponential attenuation equation. Furthermore, eco-exergy, which expresses the development degree of an ecosystem, was used to quantify the complexity of the shallow urban lake. The model was validated using data collected in the Lotus Lake in Guangzhou from 1 October 2015 to 31 January 2016. The correlation coefficient (R), the root mean square error to observations' standard deviation ratio (RSR), and the index of agreement (IOA) were calculated to evaluate the accuracy and reliability of the model. The simulated values showed good qualitative agreement with the measured values of all components. The model results showed that chlorine had a notable inhibitory effect on Microcystis aeruginosa, Brachionus plicatilis, Diaphanosoma brachyurum Liévin, and Mesocyclops leuckarti (Claus). The outbreak of Spirogyra spp. inhibited the growth of Vallisneria natans (Lour.) Hara, leading to a gradual decrease of eco-exergy and the breakdown of the ecosystem's internal equilibria.
This study gives important insight into using chlorine to achieve eutrophication control and into understanding the underlying mechanisms.
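A minimal sketch of the modeling idea described above: logistic phytoplankton growth inhibited by an exponentially attenuating chlorine concentration, integrated with a simple Euler step. All parameter values are hypothetical placeholders, not the calibrated values from the study:

```python
import math

def simulate(days, r=0.3, K=100.0, c0=1.0, k_decay=0.2, alpha=0.8, dt=1.0):
    """Euler integration of logistic phytoplankton growth under
    exponentially decaying chlorine: C(t) = c0 * exp(-k_decay * t)."""
    biomass, series = 10.0, []
    for step in range(int(days / dt)):
        chlorine = c0 * math.exp(-k_decay * step * dt)
        inhibition = 1.0 - alpha * chlorine / (1.0 + chlorine)  # bounded in (0, 1]
        biomass += dt * r * inhibition * biomass * (1.0 - biomass / K)
        series.append(biomass)
    return series

trajectory = simulate(60)
```

The qualitative behavior matches the abstract's premise: growth is suppressed while chlorine is present and recovers as the chlorine attenuates; the full model adds the competing and predating species around this term.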

Keywords: system dynamic model, urban lake, chlorine, eco-exergy

Procedia PDF Downloads 231
1120 Evaluation of Liquid Fermentation Strategies to Obtain a Biofertilizer Based on Rhizobium sp.

Authors: Andres Diaz Garcia, Ana Maria Ceballos Rojas, Duvan Albeiro Millan Montano

Abstract:

This paper describes the initial stages of technological development in liquid fermentation required to produce the quantities of biomass of the biofertilizer microorganism Rhizobium sp. strain B02 needed for the downstream unit operations at laboratory scale. In the first stage, the fermentation process was adjusted and standardized in conventional batch mode. In the second stage, various fed-batch and continuous fermentation strategies were evaluated in a 10 L bioreactor in order to optimize the yields in concentration (colony-forming units/ml·h) and biomass (g/l·h) and make the downstream unit operations feasible. The growth kinetics, the evolution of dissolved oxygen, and the pH profile generated by each strategy were monitored and used to make sequential adjustments. Once each fermentation was finished, the final concentration and viability of the biomass were determined, and performance parameters were calculated in order to select the operating conditions that significantly improved on the baseline results. Under the adjusted and standardized batch conditions, concentrations of 6.67E9 CFU/ml were reached after 27 hours of fermentation, followed by a noticeable decrease associated with basification of the culture medium. The fed-batch and continuous strategies achieved significant increases in yield, but at similar concentration levels, which led to the design of several production scenarios based on the availability of equipment time and the required batch volume.
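The volumetric productivities the abstract optimizes (CFU/ml·h and g/l·h) are simple rate quotients; a sketch using the one batch figure reported above (the biomass value below is a hypothetical placeholder, since the abstract gives no g/l number):

```python
def productivities(cfu_per_ml, biomass_g_per_l, hours):
    """Volumetric productivities of a fermentation run: (CFU/ml/h, g/l/h)."""
    return cfu_per_ml / hours, biomass_g_per_l / hours

# Batch-mode figure from the abstract: 6.67e9 CFU/ml after 27 h;
# the 4.0 g/l dry biomass is hypothetical
cfu_rate, biomass_rate = productivities(6.67e9, 4.0, 27.0)
```

Comparing these rates across batch, fed-batch, and continuous runs is what allows ranking the strategies even when final concentrations are similar.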

Keywords: biofertilizer, liquid fermentation, Rhizobium sp., standardization of processes

Procedia PDF Downloads 173
1119 Maryland Restoration of Anterior Tooth Loss as a Minimal Invasive Dentistry: An Alternative Treatment

Authors: B. Oral, C. Bal, M. S. Kar, A. Akgürbüz

Abstract:

Loss of a maxillary central incisor occurs in many patients, and the treatment of young adults with this problem is a challenge for both prosthodontists and orthodontists. Common treatment alternatives are distalization of the adjacent teeth followed by a conventional 3-unit fixed partial denture, a single implant-supported crown, or a resin-bonded fixed partial denture. This case report describes the indication for a resin-bonded fixed partial denture, the preparation of the abutment teeth, and the prosthetic procedures. The technique described here represents a conservative, esthetically pleasing, and rapid solution for a missing maxillary central incisor when implant placement and/or guided bone regeneration are not feasible because of financial, social, or time restrictions. In this case, a 16-year-old female patient who had lost her maxillary left central incisor six years earlier in a bicycle accident presented to our clinic with the chief complaint of the unaesthetic appearance associated with the missing tooth. Although orthodontic treatment was indicated because of the limited space in the traumatized area, the patient declined any orthodontic procedure; consequently, an implant-supported restoration was not an option for the narrow area. A Maryland bridge was therefore chosen as a minimally invasive retention appliance, and the patient's aesthetic appearance was restored.

Keywords: Maryland bridge, single tooth restoration, aesthetics, maxillary central incisors

Procedia PDF Downloads 356
1118 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity (EPQ) model under a vendor-managed inventory (VMI) policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product economic production quantity model and then solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and a multi-attribute decision-making (MADM) method. The results demonstrate that the augmented epsilon-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in a multi-product EPQ model under a VMI policy with several constraints.
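For context, the unconstrained single-product building block of such models is the classical EPQ formula; a minimal sketch with hypothetical parameter values (the paper's multi-product, multi-constraint model generalizes well beyond this):

```python
import math

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classical economic production quantity:
    Q* = sqrt(2*D*K / (h * (1 - D/P)))."""
    return math.sqrt(2 * demand * setup_cost /
                     (holding_cost * (1 - demand / production_rate)))

# Hypothetical single-product instance: D = 1000 units/yr, K = 50 per setup,
# h = 2 per unit per yr, P = 4000 units/yr production capacity
q_star = epq(1000, 50, 2, 4000)
```

The factor (1 - D/P) distinguishes EPQ from the plain EOQ: inventory accumulates only at the net rate while production runs, so the optimal lot grows as the production rate approaches demand.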

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 124
1117 Evaluation of Triage Performance: Nurse Practice and Problem Classifications

Authors: Atefeh Abdollahi, Maryam Bahreini, Babak Choobi Anzali, Fatemeh Rasooli

Abstract:

Introduction: Triage is a central part of the organization of care in emergency departments (EDs); the term describes the sorting of patients by treatment priority in the ED. Accurate triage of injured patients reduces fatalities and improves resource usage, and nurses' knowledge and skill are important factors in triage decision-making. The ability to assign an appropriate triage level and identify the need for intervention is crucial for safe and effective emergency care. Methods: This is a prospective cross-sectional study of emergency nurses working in four public university hospitals. Five triage workshops were conducted, one every three months, for emergency nurses based on a standard Emergency Severity Index (ESI) version IV slide set approved by the Iranian Ministry of Health. The items most influential on triage performance were identified through brainstorming in the workshops and then peer-reviewed by an expert panel of five emergency physicians and two head registered nurses. The factors that might distract a nurse from proper decisions included patients' past medical diseases, the natural pitfalls of triage, and system failure. After permission had been obtained, the emergency nurses participating in the study were given the structured questionnaire. Data were analyzed with SPSS 21.0. Results: 92 emergency nurses enrolled in the study. 30% of nurses reported a past history of chronic disease as the most influential confounding factor in assigning a triage level; other important factors were a history of prior admission (20%) and past histories of myocardial infarction (17%) and heart failure (11%). Regarding difficulties in triage practice, 54.3% reported that discussions with patients and family members were difficult, and 8.7% stated that it is hard to stay in a single triage room all day.
Among the participants, 45.7% and 26.1% rated the triage workshops as moderately and highly effective, respectively. 56.5% reported overcrowding as the most important system-based difficulty. Nurses were mainly doubtful when differentiating between triage levels 2 and 3 of the ESI IV system. No significant correlation was found between nurses' length of triage experience and either uncertainty in determining the triage level or the reported difficulties. Conclusion: Nurses' work experience appeared to have little effect on triage problems and issues. To correct the deficits, training workshops should be carried out, followed by continuous refresher training and supportive supervision.

Keywords: assessment, education, nurse, triage

Procedia PDF Downloads 227
1116 A Critical Appraisal of Adekunle Ajasin University Policy on Internet Resource Centre in Service Delivery Adekunle Ajasin University, Akungba-Akoko, Ondo State

Authors: Abimbola Olaotan Akinsete

Abstract:

Governments all over the world have intensified efforts to make internet and resource centres readily available in public institutions for the advancement of humanity and of working processes. An information and communication resource centre not only helps reduce tasks presumed to be herculean; it also influences the working rate and productivity of both staff and students. The utilization of an internet and information resource centre speeds up service delivery, reduces working time, and improves the efficiency of the system. Information and Communication Technology plays a significant role in providing an equalization strategy for developing the university community and improving educational service delivery. This equalization ensures that results can be accessed electronically, and that students' academic records and results can be transferred and confirmed anywhere in the world without being physically present to request these services. This study makes a critical appraisal of Adekunle Ajasin University's policy on the internet resource centre in service delivery at Adekunle Ajasin University, Akungba-Akoko, Ondo State. The study employs a descriptive survey design to identify hindrances to the utilization of technology in service delivery in the university. Findings revealed that the adoption of an internet and resource centre in the Exams and Records unit of the university would help deliver more efficient processing of students' records and results.

Keywords: internet, resource, centre, policy and service delivery

Procedia PDF Downloads 98
1115 Towards Carbon-Free Communities: A Compilation of Urban Design Criteria for Sustainable Neighborhoods

Authors: Atefeh Kalantari

Abstract:

The increase in population and energy consumption has caused environmental crises such as the energy crisis, increased pollution, and climate change, all of which have reduced the quality of life, especially in urban environments. Iran is a developing country that faces several challenges concerning energy use and environmental sustainability, including air pollution, climate change, and energy security. On the other hand, due to its favorable geographic characteristics, Iran has diverse and accessible renewable sources, which provide appropriate substitutes for reducing dependence on fossil fuels. Sustainable development programs and post-carbon cities rely on implementing energy policies in different sectors of society; in particular, the built environment is one of the sectors chiefly responsible for cities' energy consumption and carbon emissions. For this reason, several programs are being implemented to promote energy efficiency in urban planning, and city experts, like others, are looking for solutions to these problems. Among the solutions proposed for this purpose is low-carbon design. Among the different scales, the neighborhood is a suitable scale for applying the principles and solutions of low-carbon urban design, because the neighborhood, as a "building unit of the city," includes elements and flows that all affect the amount of CO2 emitted. The article aims to provide criteria for designing a low-carbon or carbon-free neighborhood through descriptive methods and secondary data analysis. The ultimate goal is to promote energy efficiency and create a more resilient and livable environment for local residents.

Keywords: climate change, low-carbon urban design, carbon-free neighborhood, resilience

Procedia PDF Downloads 74
1114 Processes and Applications of Casting Simulation and Its Software

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is widely used to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; Magma Cast, for example, is best suited to crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution for the process, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 474
1113 Buddhism: Its Socio-Economic Relevance in the Present Changing World

Authors: Bandana Bhattacharya

Abstract:

‘Buddhism’, as such signifies the ‘ism’ that is based on Buddha’s life and teachings or that is concerned with the gospel of Buddha as recorded in the literature available in Pali, Sanskrit, Buddhist Sanskrit, Prakrit and even in the other non-Indian languages wherein it has been described a very abstruse, complex and lofty philosophy of life or ‘the way of life’ preached by Him (Buddha). It has another side too, i.e., the applicability of the tenets of Buddha according to the needs of the present society, where human life and outlook has been totally changed. Applied Buddhism signifies the applicability of the Buddha’s noble tenets. Along with the theological exposition and textual criticism of the Buddha’s discourses, it has now become almost obligatory for the Buddhist scholars to re-interpret Buddhism from modern perspectives. Basically Applied Buddhism defined a ‘way of life’ which may transform the higher quality of life or essence of life due to changed circumstances, places and time. Nowadays, if we observe the present situation of the world, we will find the current problems such as health, economic, politic, global warming, population explosion, pollution of all types including cultural scarcity essential commodities and indiscriminate use of human, natural and water resources are becoming more and more pronounced day by day, under such a backdrop of world situation. Applied Buddhism rather Buddhism may be the only instrument left now for mankind to address all such human achievements, lapses, and problems. Buddha’s doctrine is itself called ‘akālika, timeless’. On the eve of the Mahāparinibbāṇa at Kusinara, the Blessed One allows His disciples to change, modify and alter His minor teachings according to the needs of the future, although He has made some utterances, which would eternally remain fresh. Hence Buddhism has been able to occupy a prominent place in modern life, because of its timeless applicability, emanating from a set of eternal values. 
The logical and scientific outlook of Buddha may be traced in His very first sermon named the Dhammacakkapavattana-Sutta where He suggested to avoid the two extremes, namely, constantly attachment to sensual pleasures (Kāmasukhallikānuyoga) and devotion to self-mortification that is painful as well as unprofitable and asked to adopt Majjhimapaṭipadā, ‘Middle path’, which is very much applicable even today in every spheres of human life; and the absence of which is the root cause of all problems event at present. This paper will be a humble attempt to highlight the relevance of Buddhism in the present society.

Keywords: applied Buddhism, ecology, self-awareness, value

Procedia PDF Downloads 122
1112 Communication and Management of Incidental Pathology in a Cohort of 1,214 Consecutive Appendicectomies

Authors: Matheesha Herath, Ned Kinnear, Bridget Heijkoop, Eliza Bramwell, Alannah Frazetto, Amy Noll, Prajay Patel, Derek Hennessey, Greg Otto, Christopher Dobbins, Tarik Sammour, James Moore

Abstract:

Background: Important incidental pathology requiring further action is commonly found during appendicectomy, both macroscopically and microscopically. It is unknown whether the acute surgical unit (ASU) model affects the management and disclosure of these findings. Methods: An ASU model was introduced at our institution on 01/08/2012. In this retrospective cohort study, all patients undergoing appendicectomy 2.5 years before (traditional group) or after (ASU group) this date were compared. The primary outcomes were rates of appropriate management of the incidental findings and communication of the findings to the patient and to their general practitioner (GP). Results: 1,214 patients underwent emergency appendicectomy; 465 in the traditional group and 749 in the ASU group. 80 (6.6%) patients (25 and 55 in each respective period) had important incidental findings. There were 24 patients with benign polyps, 15 with neuro-endocrine tumour, 11 with endometriosis, 8 with pelvic inflammatory disease, 8 with Enterobius vermicularis infection, 7 with low-grade mucinous cystadenoma, 3 with inflammatory bowel disease, 2 with diverticulitis, 2 with tubo-ovarian mass, 1 with secondary appendiceal malignancy and none with primary appendiceal adenocarcinoma. One patient had dual pathologies. There was no difference between the traditional and ASU groups with regard to communication of the findings to the patient (p=0.44) and their GP (p=0.27), and there was no difference in the rates of appropriate management (p=0.21). Conclusions: The introduction of an ASU model did not change rates of surgeon-to-patient and surgeon-to-GP communication, nor did it affect rates of appropriate management of important incidental pathology found during appendicectomy.

Keywords: acute care surgery, appendicitis, appendicectomy, incidental

Procedia PDF Downloads 140
1111 Prioritizing Temporary Shelter Areas for Disaster Affected People Using Hybrid Decision Support Model

Authors: Ashish Trivedi, Amol Singh

Abstract:

In recent years, the magnitude and frequency of disasters have increased at an alarming rate. Every year, more than 400 natural disasters affect the global population. A large-scale disaster leads to destruction of or damage to houses, rendering a notable number of residents homeless. Since the humanitarian response and recovery process takes considerable time, temporary establishments are arranged in order to provide shelter to the affected population. These shelter areas are vital for effective humanitarian relief; therefore, they must be strategically planned. Choosing the locations of temporary shelter areas for accommodating homeless people is critical to the quality of humanitarian assistance provided after a large-scale emergency. There has been extensive research on the facility location problem, both in theory and in application. In order to deliver sufficient relief aid within a relatively short timeframe, humanitarian relief organisations pre-position warehouses at strategic locations. However, such approaches have received limited attention from the perspective of providing shelters to disaster-affected people. The present work considers this aspect of humanitarian logistics and proposes a hybrid decision support model to determine the relative preference of potential shelter locations by assessing them against key subjective criteria. Initially, the factors considered when locating potential areas for establishing temporary shelters were identified by reviewing the extant literature and through consultation with a panel of disaster management experts. In order to determine the relative importance of individual criteria while taking into account the subjectivity of judgements, a hybrid approach of fuzzy sets and the Analytic Hierarchy Process (AHP) was adopted.
Further, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was applied to an illustrative data set to evaluate potential locations for establishing temporary shelter areas for homeless people in a disaster scenario. The contribution of this work is to propose a range of possible shelter locations for a humanitarian relief organization, using a robust multi-criteria decision support framework.
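The TOPSIS ranking step described above can be sketched briefly. The shelter-site scores, criteria and weights below are purely illustrative and are not the paper's data; in the proposed model the weights would come from the fuzzy AHP stage.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS)."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.linalg.norm(M, axis=0)       # vector-normalise each column
    V = norm * weights                         # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# Hypothetical data: 3 candidate shelter sites scored on 3 criteria
# (capacity and safety are benefit criteria; access time is a cost criterion)
scores = [[500, 4.0, 30],
          [300, 6.5, 20],
          [450, 5.0, 45]]
weights = np.array([0.5, 0.3, 0.2])    # e.g. derived from fuzzy AHP
benefit = np.array([True, True, False])

closeness = topsis(scores, weights, benefit)
ranking = np.argsort(-closeness)       # site indices, best first
```

The closeness coefficient gives the relative preference of each candidate location, which is exactly the output the decision maker would use.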

Keywords: AHP, disaster preparedness, fuzzy set theory, humanitarian logistics, TOPSIS, temporary shelters

Procedia PDF Downloads 197
1110 Programmed Cell Death in Datura and Defensive Plant Response toward Tomato Mosaic Virus

Authors: Asma Alhuqail, Nagwa Aref

Abstract:

Programmed cell death represents an active natural defense in Datura metel against TMV after three days of virus infection. The physiological plant response was assessed for asymptomatic healthy and symptomatic infected detached leaves. The results indicated H2O2 and chlorophyll-a as the most informative parameters. Chlorophyll-a was the only significant predictor for the H2O2 dependent variable, with a P value of 0.001 and an R-square of 0.900. The plant immune response was measured within three days of virus infection using cutoff values of H2O2 (61.095 µmol/100 mg) and tail moment (63.201 units) in the comet assay. Their percentage changes were 255.12% and 522.40%, respectively, reflecting the stress of virus infection in the plant. Moreover, H2O2 showed 100% specificity and sensitivity in the symptomatic infected group using receiver-operating characteristic (ROC) analysis. All tested parameters in the symptomatic infected group had significant correlations, with twenty-five positive and thirty-one negative correlations (P < 0.05 and P < 0.01). The chlorophyll-a parameter showed highly significant correlations with total protein and salicylic acid; in contrast, its correlation with the tail moment was negative (r = −0.930, P < 0.01). The strongest significant negative correlation was between chlorophyll-a and H2O2 (P < 0.01), while a moderate negative significant correlation was seen for chlorophyll-b (P < 0.05). The present study discloses the secret of the three days of rapid transient production of activated oxygen species (AOS), which was sufficient to yield potential quantitative physiological parameters of the defensive plant response toward the virus.

Keywords: programmed cell death, plant adaptive immune response, hydrogen peroxide (H2O2), physiological parameters

Procedia PDF Downloads 244
1109 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power unit outputs across all members of the power system network such that the total fuel cost is minimized while operating limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A famous one is Quadratic Programming (QP). QP is a very simple and fast method but, like other gradient-based methods, it can become trapped at local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator system test cases. The results are compared to existing results based on QP, GAs and PSO. They show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
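The DE approach can be sketched on a quadratic-cost ELD instance. The generator coefficients, limits and demand below are invented for illustration (they are not the paper's three- or six-generator test data), and the power-balance constraint is handled with a simple quadratic penalty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-generator data (all coefficients invented):
# cost_i(P) = a_i + b_i*P + c_i*P^2, with P in MW
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
p_min = np.array([200.0, 150.0, 100.0])
p_max = np.array([450.0, 350.0, 225.0])
demand = 800.0          # MW to be dispatched

def cost(P):
    """Total fuel cost plus a quadratic penalty on power-balance violation."""
    fuel = np.sum(a + b * P + c * P**2, axis=-1)
    return fuel + 1e4 * (np.sum(P, axis=-1) - demand) ** 2

# Classic DE/rand/1/bin
NP, F, CR, GENS = 30, 0.7, 0.9, 300
pop = rng.uniform(p_min, p_max, size=(NP, 3))
fit = cost(pop)
for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), p_min, p_max)
        cross = rng.random(3) < CR
        cross[rng.integers(3)] = True      # guarantee at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        trial_fit = cost(trial)
        if trial_fit <= fit[i]:            # greedy selection
            pop[i], fit[i] = trial, trial_fit

best = pop[np.argmin(fit)]                 # dispatch minimizing penalized cost
```

Clipping the mutant keeps every trial within generator limits, so only the power-balance constraint needs the penalty term.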

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 280
1108 Study and Evaluation of Occupational Health and Safety in Power Plant in Pakistan

Authors: Saira Iqbal

Abstract:

Occupational health and safety issues have nowadays become an important concern in the context of industrial production. This study is designed to measure workplace hazards at Kohinoor Energy Limited (KEL). The hazards of main focus were heat stress, noise level, light level and ergonomics. Measurements of parameters such as wet, dry and globe temperatures, WBGTi and RH% were taken directly by visiting the study area. Temperatures were recorded at the control room and the engine hall; the highest, about 38 °C, was recorded in the engine hall. Noise levels were recorded at the main areas of concern, such as the engines in the engine hall, the parking area, and the mechanical workshop. The permissible noise level is 85 dB(A). In the engine hall the noise reached about 109.6 dB(A), exceeding this limit. Illumination levels were also recorded at different areas of the power plant. The light level was within permissible limits overall, but in some areas, such as the engine hall and the boiler room, it was very low, especially in the engine hall, where the level was 29 lx. Ergonomic hazards such as extended reaching, deviated body postures, mechanical stress, and vibration exposure were assessed at different units of the plant by observing workers during working hours. Since KEL is ISO 8000 and 14000 certified, the researcher found no serious ergonomic problems; however, workers were commonly reluctant to use PPE.
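The wet, dry and globe readings mentioned above combine into the WBGT heat-stress index. A minimal sketch using the standard ISO 7243 indoor weighting is shown below; the readings are hypothetical, not the study's measurements:

```python
def wbgt_indoor(t_nwb, t_globe):
    """Indoor/shaded WBGT (ISO 7243): 0.7 * natural wet-bulb + 0.3 * globe.

    The outdoor variant with solar load adds a dry-bulb term:
    0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dry.
    """
    return 0.7 * t_nwb + 0.3 * t_globe

# Hypothetical engine-hall readings in degrees Celsius (not the study's data)
wbgt = wbgt_indoor(29.0, 41.0)   # roughly 32.6 C, a severe heat-stress level
```

Comparing such a WBGT value against the reference limits for the relevant work rate is how the heat-stress parameter is typically judged.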

Keywords: workplace hazards, heat hazard, noise hazard, illumination, ergonomics

Procedia PDF Downloads 317
1107 Topology Optimization of Heat Exchanger Manifolds for Aircraft

Authors: Hanjong Kim, Changwan Han, Seonghun Park

Abstract:

Heat exchanger manifolds in aircraft play an important role in evenly distributing the fluid entering through the inlet to the heat transfer unit. To achieve this requirement, the manifold should be light in weight while withstanding high internal pressure. Therefore, this study aims at minimizing the weight of the heat exchanger manifold through topology optimization. For topology optimization, the initial design space was created with the inner surface extracted from the currently used manifold model and with an outer surface measuring 243.42 mm × 74.09 mm × 65 mm. This design space solid model was transformed into a finite element model with a maximum tetrahedron mesh size of 2 mm using ANSYS Workbench. Then, topology optimization was performed under the boundary conditions of an internal pressure of 5.5 MPa and fixed support at the rectangular inlet boundaries using SIMULIA TOSCA. This topology optimization produced a minimized final manifold volume of 7.3% of the initial volume, against the given constraint (6% of the initial volume) and the objective function (maximizing manifold stiffness). The optimized model was 6.7% lighter than the currently used manifold, and after smoothing of the topology-optimized model this difference is expected to grow. The current optimized model has uneven thickness and a skeleton-shaped outer surface to reduce stress concentration. We are currently simplifying the optimized model shape with spline interpolations, reflecting the design characteristics in thickness and skeletal structure from the optimized model. This simplified model will be validated again by calculating both stress distributions and weight reduction, and the validated model will then be manufactured using 3D printing processes.

Keywords: topology optimization, manifold, heat exchanger, 3D printing

Procedia PDF Downloads 241
1106 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds like hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, a quantification method for these compounds helps in optimizing treatment aimed at their elimination, namely biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography with flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and the analysis was performed on a GC 2010 Plus A from Shimadzu with a sulphur filter on the detector. Injection was in splitless mode (0.3 min); the column temperature program started at 60 ºC and increased by 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC, and the detector at 260 ºC. For the calibration curve, a gas dilution unit (digital Hovagas G2 Multi-Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the gas to be diluted and another for a stream of nitrogen, and an output connected to a glass bulb. Cylinders of 40 ppm H2S and 50 ppm MM were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min and then the extremities were closed. This method allowed calibration between 1-20 ppm for H2S and between 0.02-0.1 ppm and 1-3.5 ppm for MM. Quantification of several air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.
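The calibration step can be sketched as a least-squares fit of detector response against standard concentration. The concentrations and peak areas below are invented; real FPD sulphur response is often nonlinear, so in practice a log-log or quadratic fit may be needed:

```python
import numpy as np

# Invented calibration standards for H2S: concentration vs. detector response
conc = np.array([1.0, 2.5, 5.0, 10.0, 15.0, 20.0])     # ppm
response = np.array([0.9, 2.6, 5.2, 9.8, 15.3, 19.9])  # peak area (a.u.)

# Least-squares line through the standards
slope, intercept = np.polyfit(conc, response, 1)
r = np.corrcoef(conc, response)[0, 1]                  # linearity check

def quantify(peak_area):
    """Concentration of an unknown sample from its measured peak area."""
    return (peak_area - intercept) / slope
```

Inverting the fitted line, as `quantify` does, is how the inlet and outlet sample responses would be converted to concentrations for the biofilter performance evaluation.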

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 471
1105 Sector-Wide Collaboration to Reduce Food Waste

Authors: Carolyn Cameron

Abstract:

Stop Food Waste Australia is working with industry to co-design sector action plans to prevent and reduce food waste across the supply chain. We are a public-private partnership, funded in 2021 by the Australian national government under the 2017 National Food Waste Strategy. Our partnership has representatives from all levels of government, industry associations from farm to fork, and food rescue groups. Like many countries, Australia has adopted the Sustainable Development Goal (SDG) target 12.3 of halving food waste by 2030. A seminal 2021 study, the National Food Waste Feasibility Report, developed a robust national baseline, illustrating hotspots in commodities and across the supply chain. This research found that the consumption stages (households, food service, and institutions) account for over half of all food waste, and that 22% of food produced never leaves the farm gate. Importantly, the study found it is feasible for Australia to meet SDG 12.3, but it will require unprecedented action by governments, industry, and the community. Sector Action Plans (Plans) are one of the four main initiatives of Stop Food Waste Australia, alongside a voluntary commitment, a coordinated food waste communications hub, and a robust monitoring and reporting framework. These plans provide a systems-based approach to reducing food loss and waste while realising multiple benefits for supply chain partners and other collaborators. Each plan is co-designed with the key stakeholders most able to directly control or influence the root cause(s) of food waste hotspots and to take action to reduce or eliminate food waste in their value chain. The initiatives in the Plans are fit-for-purpose, reflecting current knowledge and recognising that priorities may refocus over time. To date, sector action plans have been developed with the Food Rescue, Cold Chain, Bread and Bakery, and Dairy sectors.
Work is currently underway on Meat and Horticulture, and we are also developing supply-chain-stage plans for food services and institutions. The study will provide an overview of Australia’s food waste baseline and challenges, the important role of sector action plans in reducing food waste, and case studies of implementation outcomes.

Keywords: co-design, horticulture, sector action plans, voluntary

Procedia PDF Downloads 128
1104 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually-oriented and students require the ability to interpret images as often as words. Through this transition, comics has found a place in the field of literacy education research as the shift focuses from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. 
Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 295
1103 A Survey on the Supervision Experience of Full-Time Intern Counseling Psychologist

Authors: Szu-Fan Chen, Cheng-Tseng Lin, Ting-Chia Lien

Abstract:

This study focuses on understanding the current supervision experience of full-time intern counseling psychologists in Taiwan. The study took 197 full-time intern counseling psychologists as research subjects, comprising 146 women (74%) and 51 men (26%). In terms of internship sites, the largest number of internships were at school sites (59%), followed by community sites (30%), with only 11% in medical or corporate settings. A survey was also conducted on whether the subjects had held full-time jobs before their full-time internships: 42% had not held a full-time job and 48% had. Among the latter, 28% had been engaged in work related to psychological counseling and 20% in work unrelated to psychological counseling. In this sample, each respondent had interviewed with an average of 2.68 internship institutions in total, and the current internship unit was, on average, the 2.29th institution interviewed with. All (100%) full-time intern psychologists had entered into individual internship contracts with their internship institutions. As for professional supervisors, 178 (90%) were appointed from personnel internal to the institution, and 19 (10%) were hired from outside the institution. Supervision was mostly conducted individually (98%), and up to 60% was conducted through discussion of written or oral case reports. Regarding supervision satisfaction, 47% were very satisfied, 28% were satisfied, 18% were neutral, and 6% were dissatisfied. Based on these findings, relevant, specific, and feasible suggestions are put forward for reference by counseling practitioners and future researchers.

Keywords: full-time intern counseling psychologist, supervision experience, full-time internship, supervision

Procedia PDF Downloads 1
1102 Tiebout and Crime: How Crime Affect the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, little is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters, holding the local government accountable for the safety of its citizens, move out when the local level of security becomes too alienated from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption): looking for a place elsewhere implies transaction costs, which wealthier citizens are more likely to be able to pay. As a consequence, the average income per capita, and thus the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). A decreasing average income per capita, if not compensated by increasing earnings of the citizens who stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation by higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality. When communities do not succeed in forcing back the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities covering the period 2000-2014 is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag.
This positive effect is dampened by increasing crime rates in neighboring municipalities. Changes in violent crimes and, to a lesser extent, thefts and extortions reduce the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
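A full spatial panel specification with lags is beyond a short example, but the within (fixed-effects) transformation at the core of such panel models can be sketched on synthetic data. Every variable and the true coefficient of -0.5 below are made up; they are not the Flemish data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic municipality-year panel: the tax base yield depends on the
# crime rate plus an unobserved municipality fixed effect.
n_units, n_years = 50, 15
unit = np.repeat(np.arange(n_units), n_years)
year = np.tile(np.arange(n_years), n_units)
alpha = rng.normal(0.0, 2.0, n_units)[unit]             # fixed effects
crime = rng.normal(10.0, 3.0, n_units * n_years)        # crime proxy
y = 100.0 + alpha - 0.5 * crime + rng.normal(0.0, 1.0, n_units * n_years)

df = pd.DataFrame({"unit": unit, "year": year, "crime": crime, "y": y})

# Within (fixed-effects) transformation: demean each variable by municipality
for col in ("crime", "y"):
    df[col + "_dm"] = df[col] - df.groupby("unit")[col].transform("mean")

# FE slope estimate via OLS on the demeaned data
beta = (df["crime_dm"] * df["y_dm"]).sum() / (df["crime_dm"] ** 2).sum()
```

Demeaning sweeps out the time-invariant municipality effects, so `beta` recovers the within-municipality effect of crime; a spatial model would add spatially lagged crime terms for neighboring municipalities on top of this.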

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 302
1101 Energy Consumption, Emission Absorption and Carbon Emission Reduction on Semarang State University Campus

Authors: Dewi Liesnoor Setyowati, Puji Hardati, Tri Marhaeni Puji Astuti, Muhammad Amin

Abstract:

Universitas Negeri Semarang (UNNES) is a university with a vision of conservation. The impact of the UNNES conservation vision is a positive response from the community to the effort of greening the campus and instilling conservation values in the academic community. In reality, however, energy consumption on the UNNES campus tends to increase. The objectives of the study were to analyze energy consumption in the campus area, the absorption of emissions by trees, and the awareness of UNNES citizens in reducing emissions. The research focuses on energy consumption, carbon emissions, and the awareness of citizens in reducing emissions. The research subjects are UNNES citizens (lecturers, students and employees). The research area covers six faculties and one administrative center building. Data were collected by observation, interview and documentation, and analyzed with a quantitative descriptive method. The number of trees at UNNES is 10,264. Total emissions on the UNNES campus are 7,862,281.56 kg/year, while tree absorption is 6,289,250.38 kg/year, leaving 1,573,031.18 kg/year of emissions not yet absorbed by trees. Only two faculty areas have trees capable of absorbing their emissions. The awareness of UNNES citizens in reducing energy consumption is seen in changed habits: using energy-saving equipment (65%), reducing energy consumption per unit (68%), and carrying out energy literacy for UNNES citizens (74%). UNNES leaders consistently motivate UNNES citizens to reduce and change patterns of energy consumption.
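The campus emission balance can be checked directly from the stated totals; the arithmetic below simply reproduces it:

```python
total_emission = 7_862_281.56    # kg/year emitted on the UNNES campus
tree_absorption = 6_289_250.38   # kg/year absorbed by the 10,264 trees

unabsorbed = total_emission - tree_absorption      # kg/year left unabsorbed
share_absorbed = tree_absorption / total_emission  # fraction absorbed (~0.80)
```

So the campus trees absorb roughly 80% of the estimated emissions, leaving about 1.57 million kg/year unabsorbed.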

Keywords: energy consumption, carbon emission absorption, emission reduction, energy literacy

Procedia PDF Downloads 243
1100 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach to model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the large number of parameters and the large datasets needed in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters (initial loss, reduction factor, time of concentration, and time lag) were considered as the primary parameter set. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments.
The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the uncertainty associated with the predictions can be quantified; MIKE URBAN, in contrast, provides only a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
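The rejection-ABC idea behind such a framework can be sketched in a few lines: draw candidate parameters from priors, simulate, and keep draws whose simulated output lies close to the observations. The toy rainfall-runoff model, priors, tolerance, and parameter values below are illustrative assumptions, not the paper's actual model or calibration set:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy rainfall-runoff model: runoff = C * max(rain - IL, 0),
# where C is a reduction factor and IL an initial loss (mm).
rain = rng.uniform(0.0, 20.0, size=50)

def simulate(c, il):
    return c * np.maximum(rain - il, 0.0)

# Synthetic "observations" from assumed true parameters C=0.6, IL=2.0,
# with additive noise (stands in for real gauged runoff data).
observed = simulate(0.6, 2.0) + rng.normal(0.0, 0.3, size=50)

# Rejection ABC: sample from the priors, accept a draw when the
# RMSE between simulation and observations is below a tolerance.
n_draws, tol = 20_000, 0.5
c_prior = rng.uniform(0.1, 1.0, n_draws)   # prior on reduction factor
il_prior = rng.uniform(0.0, 5.0, n_draws)  # prior on initial loss (mm)

accepted = []
for c, il in zip(c_prior, il_prior):
    rmse = np.sqrt(np.mean((simulate(c, il) - observed) ** 2))
    if rmse < tol:
        accepted.append((c, il))

post = np.array(accepted)  # approximate posterior sample
c_lo, c_hi = np.percentile(post[:, 0], [2.5, 97.5])
print(f"accepted {len(post)} draws; "
      f"95% credible interval for C: [{c_lo:.2f}, {c_hi:.2f}]")
```

The accepted draws approximate the posterior, so credible intervals for runoff predictions (the ones compared against MIKE URBAN output in the paper) follow directly by re-simulating with each accepted parameter set.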

Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 300