Search results for: simulation of hybrid energy systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9531

81 Neighborhood Sustainability Assessment Tools: A Conceptual Framework for Their Use in Building Adaptive Capacity to Climate Change

Authors: Sally Naji, Julie Gwilliam

Abstract:

Climate change remains a challenging matter for humans and the built environment in the 21st century, and the need to consider adaptation to climate change in the development process is paramount. However, there remains a lack of information regarding how we should prepare responses to this issue, such as by developing organized and sophisticated tools that enable the adaptation process. This study aims to build a systematic framework to investigate the potential that Neighborhood Sustainability Assessment (NSA) tools might offer in enabling both the analysis and the promotion of adaptive capacity to climate change. The framework presented in this paper discusses this issue in three main phases. The first part attempts to link sustainability and climate change in the context of adaptive capacity. It is argued that, in deciding to promote sustainability in the context of climate change, both the resilience and vulnerability processes become central. However, there is still a gap in the current literature regarding how the sustainable development process can respond to climate change, as well as how the resilience of practical strategies might be evaluated. It is suggested that integrating sustainability assessment processes with both resilience thinking and vulnerability might provide important components for addressing adaptive capacity to climate change. A critical review of the existing literature is presented, illustrating the current lack of work integrating these three concepts in the context of addressing adaptive capacity to climate change. The second part aims to identify the most appropriate scale at which to address the built environment for climate change adaptation. It is suggested that the neighborhood scale can be considered more suitable than either the building or urban scales. It then presents the example of NSAs and discusses the need to explore their potential role in promoting adaptive capacity to climate change. The third part of the framework presents a comparison among three example NSAs: BREEAM Communities, LEED-ND, and CASBEE-UD. These three tools have been selected as the most developed and comprehensive assessment tools currently available for the neighborhood scale. This study concludes that NSAs are likely to provide the basis for an organized framework to address the practical process of analyzing and promoting adaptive capacity to climate change. It is further argued that vulnerability (exposure and sensitivity) and resilience (interdependence and recovery) form essential aspects to be addressed in the future assessment of NSAs' capability to adapt to both short- and long-term climate change impacts. Finally, it is acknowledged that further work is now required to understand impact assessment across the range of physical sectors (water, energy, transportation, building, land use and ecosystems), actor and stakeholder engagement, and a detailed evaluation of NSA indicators, together with a barriers diagnosis process.

Keywords: Adaptive capacity, climate change, NSA tools, resilience, vulnerability.

80 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by these communities. The system has several processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny-pruned Haar cascade detection algorithms, where Canny pruning applies Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered; the greater the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
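
A comparable pipeline can be sketched with Python's OpenCV bindings rather than the authors' EmguCV/C# implementation. In the sketch below, the cascade file name, the HSV skin thresholds, and the background frame are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Haar-cascade hand detection, Canny edge detection, and
# HSV skin segmentation with convex hull and centroid (OpenCV 4.x signatures).
import cv2
import numpy as np

def detect_hand(frame, cascade_path="hand_cascade.xml"):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_path)          # trained hand model (placeholder file)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    edges = cv2.Canny(gray, 100, 200)                      # Canny edge map used to refine candidates
    return boxes, edges

def skin_centroid(frame, background):
    diff = cv2.absdiff(frame, background)                  # simple background subtraction
    hsv = cv2.cvtColor(diff, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))   # illustrative skin-color thresholds
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)                            # convex hull of the hand blob
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return hull, centroid
```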

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.

79 Life Cycle Datasets for the Ornamental Stone Sector

Authors: Isabella Bianco, Gian Andrea Blengini

Abstract:

The environmental impact related to ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements of machinery led to a higher exploitation of this natural resource and to greater interaction between international markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, if compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the large number of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with specific data on stone processes. To this goal, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). In particular, data about energy, materials and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then processed using dedicated software to build a life cycle model. The model was realized by setting free parameters that allow an easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the realization of accurate Life Cycle Inventory data aims at making available, to researchers and stone experts, ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector.
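
To make the idea of a parameterized from-cradle-to-gate model concrete, the sketch below shows one possible code structure: each process exposes free parameters (e.g. diamond-wire consumption per functional unit) that a company can override, and impacts are summed as amount times characterization factor. All process names, flows and numbers are illustrative placeholders, not ILCD data or values from the study.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    amounts: dict            # {flow name: amount per functional unit}

@dataclass
class StoneModel:
    factors: dict            # {flow name: kg CO2-eq per unit of flow} (placeholder values)
    processes: list = field(default_factory=list)

    def gwp(self):
        # Global-warming impact of the whole chain per functional unit (e.g. 1 m2 of slab)
        return sum(a * self.factors[f]
                   for p in self.processes for f, a in p.amounts.items())

model = StoneModel(factors={"electricity_kWh": 0.4, "diamond_wire_m": 2.0, "diesel_kg": 3.2})
model.processes = [
    Process("bench extraction",  {"diesel_kg": 0.8, "diamond_wire_m": 0.05}),
    Process("block cutting",     {"electricity_kWh": 6.0, "diamond_wire_m": 0.02}),
    Process("surface finishing", {"electricity_kWh": 1.5}),
]
print(f"GWP per functional unit: {model.gwp():.2f} kg CO2-eq (placeholder data)")
```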

Keywords: LCA datasets, life cycle assessment, ornamental stone, stone environmental impact.

78 Enhancement of Accountability within the South African Public Sector: Knowledge Gained from the Case of a National Commissioner of the South African Police Service

Authors: Yasmin Nanabhay

Abstract:

The paper scrutinizes the literature on accountability and non-accountability, and then presents an analysis of a South African case which demonstrated the consequences of a lack of accountability. Ethical conduct displayed by members of the public sector is integral to creating a sustainable democratic government, which upholds the constitutional tenets of accountability, transparency and professional ethicality. Furthermore, a true constitutional democracy emphasises and advocates the notion of service leadership that nurtures public participation and engages with citizens in a positive manner. Ethical conduct and accountability in the public sector earn public trust; hence these are key principles in good governance. Yet, in the years since the advent of democracy in South Africa, the government has been plagued by rampant corruption and mal-administration by public officials and politicians in leadership positions. The control measures passed by government in an attempt to ensure ethicality and accountability within the public sector include codes of ethics, rules of conduct and the enactment of legislation. These are intended to shape the mindset of members of the public sector, with the ultimate aim of an efficient, effective, ethical, responsive and accountable public service. The purpose of the paper is to analyse control systems and accountability within the public sector and to present reasons for non-accountability by means of a selected case study. The selected case study is the corruption trial of Jackie Selebi, who served as National Commissioner of the South African Police Service but was dismissed from the post. The reasons for non-accountability in the public sector are examined, and recommendations based on the findings are made to enhance accountability. The case study demonstrates the experience and impact of corruption and/or mal-administration, as a result of a lack of accountability, which has contributed to the increasing loss of confidence in political leadership in the country as elsewhere in the world. The literature is applied to the erstwhile National Commissioner of the South African Police Service and President of Interpol, as a case study of non-accountability.

Keywords: Public sector, public accountability, internal control, oversight mechanisms, non-compliance, corruption, mal-administration.

77 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Accurate measurement sensors are either very expensive, as with LIDAR, or limited in operational range, as with ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature tracking technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in the estimation of relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature’s projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed approaches. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
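
The first approach described above can be illustrated with a minimal EKF sketch: the state is [distance to ground d, descent rate v], the process model is the UAV's descent kinematics, and the measurement is the optical-flow rate w = v / d. This is a sketch under those assumptions, not the authors' implementation; the noise parameters and flow readings are placeholders.

```python
import numpy as np

def ekf_step(x, P, w_meas, dt, q_acc=0.5, r_flow=1e-2):
    # Predict: constant descent rate, distance decreases as d_{k+1} = d_k - v_k*dt
    F = np.array([[1.0, -dt],
                  [0.0, 1.0]])
    x_pred = F @ x
    Q = q_acc * np.array([[dt**4 / 4, dt**3 / 2],
                          [dt**3 / 2, dt**2]])          # acceleration-noise process covariance
    P_pred = F @ P @ F.T + Q
    # Update with the nonlinear optical-flow measurement h(x) = v / d  (units 1/s)
    d_p, v_p = x_pred
    h = v_p / d_p
    H = np.array([[-v_p / d_p**2, 1.0 / d_p]])          # Jacobian of h w.r.t. [d, v]
    S = H @ P_pred @ H.T + r_flow
    K = P_pred @ H.T / S
    x_new = x_pred + (K * (w_meas - h)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Usage: propagate from an initial guess with a few illustrative flow readings
x, P = np.array([10.0, 1.0]), np.eye(2)
for w in [0.11, 0.12, 0.13]:
    x, P = ekf_step(x, P, w, dt=0.05)
print("estimated distance %.2f m, descent rate %.2f m/s" % (x[0], x[1]))
```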

Keywords: Automatic landing, multirotor, nonlinear control, parameters estimation, optical flow.

76 Modeling of Alpha-Particles’ Epigenetic Effects in Short-Term Test on Drosophila melanogaster

Authors: Z. M. Biyasheva, M. Zh. Tleubergenova, Y. A. Zaripova, A. L. Shakirov, V. V. Dyachkov

Abstract:

In recent years, interest in ecogenetic and biomedical problems related to the effects of radon and its daughter decay products on the population has increased significantly. Of particular interest is the assessment of the consequences of irradiation in hazardous radon areas, which include the Almaty region due to the large number of tectonic faults that enhance radon emanation. In connection with the foregoing, the purpose of this work was to study the genetic effects of exposure to supernormal radon doses using an alpha-radiation model. Irradiation does not affect the growth of the cell, but rather its ability to differentiate. In addition, irradiation can lead to somatic mutations, morphoses and modifications. This damage most likely arises from changes in the composition of the substances of the cell. Such changes are epigenetic, since they affect the regulatory processes of ontogenesis. Variability in the expression of regulatory genes refers to conditional mutations that modify the formation of signs of intraspecific similarity. Characteristic features of these conditional mutations are the dominant type of their manifestation, phenotypic asymmetry and their instability across generations. Currently, the terms “morphosis” and “modification” are used to describe epigenetic variability, which is maintained in Drosophila melanogaster cultures using linked X-chromosomes, with the mutant X-chromosome transmitted along the paternal line. In this paper, we investigated the epigenetic effects of alpha particles, whose source in nature is mainly radon and its daughter decay products. In the experiment, the isotope plutonium-238 (Pu-238), generating alpha particles with an energy of about 5.5 MeV, was used as the source of alpha particles. In the first generation (F1), deformities or morphoses were found, which can be called "radiation syndromes" or mutations, the manifestation of which is similar to the pleiotropic action of genes. The proportion of morphoses in the experiment was 1.8%, and in the control 0.4%. In this experiment, the morphoses in the flies of the first and second generation looked like black spots, or melanomas, on different parts of the imago body; "generalized" melanomas; curled, curved wings; shortened wings; a bubble on one wing; absence of one wing; deformation of the thorax; interruption and disruption of tergite patterns; disrupted distribution of ocular facets and bristles; and absence of pigmentation of the second and third legs. Statistical analysis by the chi-square method showed that the difference between experiment and control was significant at P ≤ 0.01. On this basis, it can be considered that alpha particles, which in the environment are mainly generated by radon and its daughter decay products, have a mutagenic effect that manifests itself mainly in the formation of morphoses or deformities.
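
The chi-square comparison reported above (1.8% morphoses in the exposed flies vs 0.4% in the control) can be reproduced along the lines of the sketch below. The absolute counts are hypothetical, chosen only to illustrate the test, since the abstract reports proportions rather than raw numbers.

```python
from scipy.stats import chi2_contingency

exposed = [18, 982]    # [flies with morphoses, normal flies] -> 1.8% (hypothetical counts)
control = [4, 996]     # -> 0.4% (hypothetical counts)
chi2, p, dof, expected = chi2_contingency([exposed, control])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # p < 0.01 for these illustrative counts
```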

Keywords: Alpha-radiation, genotoxicity, morphoses, radioecology, radon.

75 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors affecting IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing while evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured are particulate matter (PM) smaller than 2.5 micrometers (PM2.5) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota. The measurements were conducted using an electric stove for cooking. The amount and type of food and oil used for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two use an extractor as mechanical ventilation, and two use the extractor and open windows as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system to control particles and CO2. Natural ventilation reduces the concentration by 79% for LDSA and 55% for PM2.5, compared to no ventilation; similarly, the CO2 concentration is reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than with natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food. Further research is needed to assess the impact of the fat in food on particle generation.
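
The well-mixed vessel model mentioned above reduces to a single-zone mass balance, dC/dt = S/V - (a + k)C, where S is the source strength during cooking, V the room volume, a the air exchange rate and k the deposition/removal rate. The sketch below is a minimal illustration under those simplifying assumptions; the parameter values are placeholders, not the study's fitted values.

```python
import numpy as np

def well_mixed(t, S, V, a, k, C0=0.0):
    """Concentration vs time for a constant source switched on at t = 0."""
    loss = a + k                          # total first-order loss rate (1/h)
    C_ss = S / (V * loss)                 # steady-state concentration
    return C_ss + (C0 - C_ss) * np.exp(-loss * t)

t = np.linspace(0.0, 2.0, 50)             # hours
cooking = well_mixed(t, S=5e9, V=50.0, a=0.5, k=1.0)   # source on (illustrative units)
decay = cooking[-1] * np.exp(-(0.5 + 1.0) * t)         # source off: pure exponential decay
```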

Keywords: Cooking, indoor air quality, low-cost sensor, ventilation.

74 An AI-Generated Semantic Communication Platform in Human-Computer Interaction Course

Authors: Yi Yang, Jiasong Sun

Abstract:

Almost every aspect of our daily lives is now intertwined with some degree of Human-Computer Interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology and more. The HCI course in the Department of Electronics at Tsinghua University, known as the Media and Cognition course, is constantly updated to reflect the most advanced technological developments, such as virtual reality, augmented reality and artificial intelligence-based interaction. For more than a decade, this course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. The advancements in AI-generated technology, which has gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogue from given prompts. In its latest iteration, the HCI course develops a semantic communication platform based on AI-generated techniques. We explored a student-centered model and proposed an interest-based teaching method. Students are no longer just recipients of knowledge but become active participants in a learning process driven by personal interests, thereby being encouraged to take responsibility for their own education. One of the latest results of this teaching approach in the Media and Cognition course is a student proposal to develop a semantic communication platform rooted in artificial intelligence generative technologies. The platform addresses a key challenge in communications technology: the ability to preserve visual signals. The interest-based approach emphasizes personal curiosity and active participation, and the proposal of an AI-generated semantic communication platform is an example and successful result of how students can exert greater creativity when they have the power to control their own learning.

Keywords: Human-computer interaction, media and cognition course, semantic communication, retain ability, prompts.

73 Arginase Enzyme Activity in Human Serum as a Marker of Cognitive Function: The Role of Inositol in Combination with Arginine Silicate

Authors: Katie Emerson, Sara Perez-Ojalvo, Jim Komorowski, Danielle Greenberg

Abstract:

The purpose of this study was to evaluate arginase activity levels in response to combinations of an inositol-stabilized arginine silicate (ASI; Nitrosigine®), L-arginine, and inositol. Arginine acts as a vasodilator that promotes increased blood flow, resulting in enhanced delivery of oxygen and nutrients to the brain and other tissues. Arginase, found in human serum, catalyzes the conversion of arginine to ornithine and urea, completing the last step in the urea cycle. Decreasing arginase levels maintains arginine and results in increased nitric oxide production. This study aimed to determine the most effective combination of ASI, L-arginine and inositol for minimizing arginase levels and therefore maximizing ASI’s effect on cognition. Serum was taken from untreated healthy donors by separation from clotted factors. Arginase activity of serum in the presence or absence of test products was determined (QuantiChrom™, DARG-100, Bioassay Systems, Hayward, CA). The remaining ultrafiltered serum units were harvested and used as the source for the arginase enzyme. ASI alone or combined with varied levels of inositol was tested as follows: ASI + inositol at 0.25 g, 0.5 g, 0.75 g, or 1.00 g. L-arginine was also tested as a positive control. All tests elicited changes in arginase activity, demonstrating the efficacy of the method used. Adding L-arginine to serum from untreated subjects, with or without inositol, had only a mild effect. Adding inositol at all levels reduced arginase activity. Adding 0.5 g of inositol to the standardized amount of ASI led to the lowest arginase activity compared to the 0.25 g, 0.75 g or 1.00 g doses of inositol or to L-arginine alone. The outcome of this study demonstrates an interaction of the pairing of inositol with ASI on the activity of the enzyme arginase. We found that neither the maximum nor the minimum amount of inositol tested in this study led to maximal arginase inhibition. Since the inhibition of arginase activity is desirable for product formulations looking to maintain arginine levels, the most effective amount of inositol was deemed preferred. Subsequent studies suggest this moderate level of inositol in combination with ASI leads to cognitive improvements, including reaction time, executive function, and concentration.

Keywords: Arginine, blood flow, colorimetry, urea cycle.

72 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions

Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren

Abstract:

Based on multivariate statistical analysis theory, this paper uses principal component analysis, Mahalanobis distance analysis and curve fitting to establish a photovoltaic health model for evaluating the health of photovoltaic panels. First of all, the photovoltaic panel variable data are classified according to weather conditions into five categories: sunny, cloudy, rainy, foggy and overcast. The health of photovoltaic panels in these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by photovoltaic panels has a significant nonlinear relationship with time. The fitting method was used to fit the relationship between the amount of electricity generated and time, and the nonlinear equation was obtained. Then, principal component analysis was used to analyze the independent variables under the five weather conditions; according to the Kaiser-Meyer-Olkin test, it was found that three types of weather, namely overcast, foggy, and sunny, meet the conditions for factor analysis, while cloudy and rainy weather do not. Therefore, through principal component analysis, the main components of overcast weather are temperature, AQI, and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI, and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity and time. Finally, taking the variable values in sunny weather as observed values and the main components of cloudy, foggy, overcast and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values are obtained. A comparative analysis was carried out on the degree of deviation of the Mahalanobis distance to determine the health of the photovoltaic panels under different weather conditions. It was found that the weather conditions in which the Mahalanobis distance fluctuations ranged from small to large were: foggy, cloudy, overcast and rainy.
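
The Mahalanobis distance step described above can be sketched as follows: the sunny-weather observations define a reference distribution, and samples from other weather types are scored against it. This is a minimal sketch, not the paper's SPSS/MATLAB workflow, and the arrays stand in for the measured variables (temperature, AQI, PM2.5, ...).

```python
import numpy as np

def mahalanobis_to_reference(X_ref, X_samples):
    mu = X_ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
    diff = X_samples - mu
    # sqrt of the quadratic form (x - mu)^T S^-1 (x - mu), evaluated per sample row
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(0)
X_sunny = rng.normal(size=(200, 3))              # stand-in for sunny-weather reference data
X_foggy = rng.normal(0.5, 1.2, size=(50, 3))     # stand-in for another weather type
d = mahalanobis_to_reference(X_sunny, X_foggy)
print("mean Mahalanobis distance:", d.mean())
```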

Keywords: Fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB.

71 Security Model of a Unified Communications and Integrated Collaborations System in the Health Sector Environment of Developing Countries: A Case of Uganda

Authors: Excellence Favor, Bakari M. M. Mwinyiwiwa

Abstract:

Access to information holds the key to the empowerment of everybody, regardless of where they live. This research has been carried out with respect to people living in developing countries, considering their plight and the complex geographical, demographic and socio-economic conditions of the areas where they live, which hinder access both to information and to professionals providing services, such as medical workers, and which have led to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries aims at creating a possible solution for bridging the digital canyon among the communities. The system is meant to deliver services in a seamless manner so that health workers situated anywhere can be reached easily and can access information, which will enhance service delivery. The proposed UCIC provides the most immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the transformative platform delivers ultra-low operating costs through the use of general-purpose networks, special lenses and tracking systems. The essence of this study is to create a security model for the deployment of the UCIC system in the health sector of developing countries. The modelling approach used for building the UCIC system security carefully considers the specific requirements of health sector environment organizations, such as the data centre, national, regional and district hospitals, and health centres IV, III, II and I, and then builds the single best possible secure network to meet their needs. The security model demonstrates how the components of the UCIC system will be protected physically and logically in the health sector environment. The UCIC system, once adopted and implemented correctly, will enhance the speed and quality of services offered by health workers. The capacities of UCIC will help health workers shorten decision cycles, accelerate service delivery and save lives by speeding access to information and by making it possible for all health workers and patients to collaborate ubiquitously.

Keywords: Developing Countries, Health Sector Environment, Security, Unified Communications and Integrated Collaborations.

70 Study of Unsteady Behaviour of Dynamic Shock Systems in Supersonic Engine Intakes

Authors: Siddharth Ahuja, T. M. Muruganandam

Abstract:

An analytical investigation is performed to study the unsteady response of a one-dimensional, non-linear dynamic shock system to external downstream pressure perturbations in a supersonic flow in a varying-area duct. For a given pressure ratio across a wind tunnel, the normal shock's location can be computed as per one-dimensional steady gas dynamics. Similarly, for some other pressure ratio, the location of the normal shock will change accordingly, again computed using one-dimensional gas dynamics. This investigation focuses on the small time interval between the first steady shock location and the new steady shock location (corresponding to different pressure ratios). In essence, this study aims to shed light on the motion of the shock from one steady location to another. Further, this study aims to lay a foundation for the field of unsteady gas dynamics, enabling further insight in future research work. In accordance with the new pressure ratio, a pressure pulse is generated at the exit of the tunnel, which travels and perturbs the shock from its original position, setting it into motion. During such activity, numerous other physical phenomena also occur at the same time. However, this study focuses on three broad phenomena: wave traversal, fluid element interactions and wave interactions. These three phenomena create, alter and annihilate numerous waves under different conditions. The waves created by these phenomena eventually interact with the shock and set it into motion. Numerous such interactions with the shock will slowly make it settle into its final position, corresponding to the new pressure ratio across the duct, as estimated by one-dimensional gas dynamics. This analysis will be extremely helpful in the prediction of inlet 'unstart' of the flow in a supersonic engine intake; its dependence on the incoming flow Mach number, the incoming flow pressure and the external perturbation pressure is also studied to help design more efficient supersonic intakes for engines like ramjets and scramjets.
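
For the steady one-dimensional gas dynamics referred to above, each steady shock state follows from a pressure ratio via the standard normal-shock relation p2/p1 = 1 + 2γ/(γ+1)(M1² − 1). The sketch below evaluates and inverts that textbook relation; it illustrates the steady bookends of the problem, not the authors' unsteady analysis.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def pressure_ratio(M1):
    """Static pressure ratio across a normal shock with upstream Mach number M1 > 1."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (M1**2 - 1.0)

def shock_mach(p_ratio):
    """Analytic inversion: upstream Mach number for a given static pressure ratio."""
    return math.sqrt((p_ratio - 1.0) * (GAMMA + 1.0) / (2.0 * GAMMA) + 1.0)

print(pressure_ratio(2.0))   # ~4.5 for M1 = 2
print(shock_mach(4.5))       # ~2.0
```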

Keywords: Analytical investigation, compression and expansion waves, fluid element interactions, shock trajectory, supersonic flow, unsteady gas dynamics, varying area duct, wave interactions.

69 Understanding Help Seeking among Black Women with Clinically Significant Posttraumatic Stress Symptoms

Authors: Glenda Wrenn, Juliet Muzere, Meldra Hall, Allyson Belton, Kisha Holden, Chanita Hughes-Halbert, Martha Kent, Bekh Bradley

Abstract:

Understanding the help-seeking decision-making process and experiences of health disparity populations with posttraumatic stress disorder (PTSD) is central to the development of trauma-informed, culturally centered, and patient-focused services. Yet, little is known about the decision-making process among adult Black women who are non-treatment seekers, as they are, by definition, not engaged in services. Methods: Audiotaped interviews were conducted with 30 African American adult women with clinically significant PTSD symptoms who were engaged in primary care but not in treatment for PTSD despite symptom burden. A qualitative interview guide was used to elucidate key themes. Independent coding of themes mapped to theory and identification of emergent themes were conducted using qualitative methods. An existing quantitative dataset was analyzed to contextualize responses and provide a descriptive summary of the sample. Results: Emergent themes revealed active mental avoidance, the intermittent nature of distress, ambivalence, and self-identified resilience as undermining help-seeking decisions. Participants were stuck within the help-seeking phase of ‘recognition’ of illness and retained a sense of “it is my decision” despite endorsing significant negative social and environmental influences. Participants distinguished ‘help acceptance’ from ‘help seeking’, with greater willingness to accept help and importance placed on being of help to others. Conclusions: Elucidation of the decision-making process from the perspective of non-treatment seekers has implications for outreach and treatment within models of integrated and specialty systems of care. The salience of responses to trauma symptoms and stagnation in the help-seeking recognition phase are findings relevant to integrated care service design and community engagement.

Keywords: Culture, help-seeking, integrated care, PTSD.

68 A Multi-Level Web-Based Parallel Processing System: A Hierarchical Volunteer Computing Approach

Authors: Abdelrahman Ahmed Mohamed Osman

Abstract:

Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, and accordingly the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components. These components are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may also delegate some of the load to other lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time get a high degree of trust. Nodes that fail to satisfy their contracts get a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated. Communications and messages between nodes are very secure. The system has been implemented using C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
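
The behavior-based scheduling idea described above can be sketched in a few lines: each node's trust rises when it fulfils its contract within the expected time and falls when it does not, and work is assigned to the node with the best trust/load combination. This is a Python sketch, not the system's C# code; the update rule and weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float = 0.5       # degree of trust in [0, 1]
    load: float = 0.0        # fraction of capacity currently in use

    def report(self, met_deadline: bool, alpha: float = 0.2):
        # exponential moving average of contract fulfilment
        self.trust = (1 - alpha) * self.trust + alpha * (1.0 if met_deadline else 0.0)

def pick_node(nodes):
    # prefer trusted, lightly loaded nodes
    return max(nodes, key=lambda n: n.trust * (1.0 - n.load))

nodes = [Node("volunteer-1", load=0.3), Node("super-volunteer-A", load=0.7)]
nodes[0].report(met_deadline=True)
nodes[1].report(met_deadline=False)
print("next task goes to:", pick_node(nodes).name)
```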

Keywords: Volunteer computing, Parallel Processing, XMLWebServices, .NET Remoting, Tuplespace.

67 Investigation of New Method to Achieve Well Dispersed Multiwall Carbon Nanotubes Reinforced Al Matrix Composites

Authors: A. H. Javadi, Sh. Mirdamadi, M. A. Faghisani, S. Shakhesi

Abstract:

Nanostructured materials have attracted many researchers due to their outstanding mechanical and physical properties. For example, carbon nanotubes (CNTs) or carbon nanofibres (CNFs) are considered to be attractive reinforcement materials for light-weight and high-strength metal matrix composites. These composites are being projected for use in structural applications for their high specific strength, as well as in functional materials for their exciting thermal and electrical characteristics. The critical issues of CNT-reinforced MMCs include processing techniques, nanotube dispersion, interface, strengthening mechanisms and mechanical properties. One of the major obstacles to the effective use of carbon nanotubes as reinforcements in metal matrix composites is their agglomeration and poor distribution/dispersion within the metallic matrix. In order to tap into the advantages of the properties of CNTs (or CNFs) in composites, good dispersion of CNTs (or CNFs) and strong interfacial bonding are the key issues, which are still challenging. Processing techniques used for synthesis of the composites have been studied with the objective of achieving a homogeneous distribution of carbon nanotubes in the matrix. Modified mechanical alloying (ball milling) techniques have emerged as promising routes for the fabrication of carbon nanotube (CNT) reinforced metal matrix composites. In order to obtain a homogeneous product, good control of the milling process, in particular control of the ball movement, is essential. Control of the ball motion during milling leads to a reduction in grinding energy and a more homogeneous product. Also, the critical inner diameter of the milling container at a particular rotational speed can be calculated. In the present work, we use conventional and modified mechanical alloying to generate a homogeneous distribution of 2 wt.% CNTs within Al powder. 99%-purity aluminium powder (Acros, 200 mesh) was used along with two different types of multiwall carbon nanotubes (MWCNTs) having different aspect ratios to produce Al-CNT composites. The composite powders were processed into bulk material by compaction and sintering, using cylindrical compaction and a tube furnace. Field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), Raman spectroscopy and a Vickers macrohardness tester were used to evaluate CNT dispersion, powder morphology, CNT damage, phase composition, mechanical properties and crystallite size. Despite the success of ball milling in dispersing CNTs in Al powder, it is often accompanied by considerable strain hardening of the Al powder, which may have implications for the final properties of the composite. The results show that particle size and morphology vary with milling time. Also, by using mixing and sonication before mechanical alloying and modified ball milling, the dispersion of the CNTs in the Al matrix improves.
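
The critical-diameter remark above follows from the textbook ball-mill relation: at the critical speed the centrifugal force on a ball balances gravity, m·ω²·R = m·g, giving N_c ≈ 42.3/√D rpm for a container of inner diameter D in metres; inverting gives the critical diameter for a chosen speed. The sketch below applies that standard relation; it is not a formula taken from the paper, and the example values are illustrative.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def critical_speed_rpm(D_m):
    """Critical rotational speed (rpm) for a container of inner diameter D_m (m)."""
    return 60.0 / (2.0 * math.pi) * math.sqrt(2.0 * G / D_m)

def critical_diameter_m(N_rpm):
    """Inner diameter (m) at which N_rpm is the critical speed."""
    return 2.0 * G * (60.0 / (2.0 * math.pi * N_rpm)) ** 2

print(round(critical_speed_rpm(0.20), 1), "rpm for a 20 cm jar")
print(round(critical_diameter_m(250) * 100, 1), "cm critical diameter at 250 rpm")
```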

Keywords: Multiwall carbon nanotube, aluminum matrix composite, dispersion, mechanical alloying, sintering.

66 Modelling Forest Fire Risk in the Goaso Forest Area of Ghana: Remote Sensing and Geographic Information Systems Approach

Authors: Bernard Kumi-Boateng, Issaka Yakubu

Abstract:

Forest fire, which is an uncontrolled fire occurring in nature, has become a major concern for the Forestry Commission of Ghana (FCG). Forest fires in Ghana usually result in massive destruction, and it takes a long time for the firefighting crews to gain control over the situation. In order to assess the effect of forest fire at a local scale, it is important to consider the role fire plays in vegetation composition, biodiversity, soil erosion, and the hydrological cycle. The occurrence, frequency and behaviour of forest fires vary over time and space, primarily as a result of the complicated influences of changes in land use, vegetation composition, fire suppression efforts, and other indigenous factors. One of the forest zones in Ghana with a high level of vegetation stress is the Goaso forest area. The area has experienced changes in its traditional land use, such as hunting, charcoal production, inefficient logging practices and rural abandonment patterns. These factors, which were identified as major causes of forest fire, have recently modified the incidence of fire in the Goaso area. In spite of the incidence of forest fires in the Goaso forest area, most of the forest services do not provide a cartographic representation of the burned areas. This has resulted in a significant amount of information being required by the firefighting unit of the FCG to understand fire risk factors and their spatial effects. This study uses Remote Sensing and Geographic Information System techniques to develop a fire risk hazard model using the Goaso Forest Area (GFA) as a case study. From the results of the study, natural forest, agricultural land and plantation cover types were identified as the major fuel-contributing loads, while water bodies, roads and settlements were identified as minor fuel-contributing loads. Based on the major and minor fuel-contributing loads, a forest fire risk hazard model with reasonable accuracy has been developed for the GFA to assist decision making.
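
A GIS-style weighted overlay of the kind implied above can be sketched as follows: reclassified raster layers (cover type, proximity to roads, settlements, water) are combined into a single fire-risk surface. The layer names, class scores and weights below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def fire_risk(layers, weights):
    """Weighted sum of reclassified layers (each scored, e.g., 1 = low ... 5 = high risk)."""
    risk = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        risk += w * layers[name]
    return risk / sum(weights.values())        # normalize back to the 1-5 score range

shape = (100, 100)                              # stand-in raster grid
layers = {
    "cover_type": np.random.randint(1, 6, shape),            # e.g. natural forest/plantation score high
    "road_proximity": np.random.randint(1, 6, shape),
    "settlement_proximity": np.random.randint(1, 6, shape),
}
weights = {"cover_type": 0.6, "road_proximity": 0.2, "settlement_proximity": 0.2}
risk_map = fire_risk(layers, weights)
```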

Keywords: Forest risk, GIS, remote sensing, Goaso.

65 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, that of bubbles rising in constricted geometries has received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single sliding air bubble under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the field at different instances in time to slices of a large-scale wake structure. This is, in essence, Taylor’s ”frozen turbulence” hypothesis. Applying this to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake. This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
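
The "frozen turbulence" reconstruction described above amounts to stacking successive cross-plane vorticity fields into a pseudo-volume by converting time to a streamwise coordinate x = U·t, with U an assumed constant convection velocity. The sketch below shows that bookkeeping only; the frame rate, velocity and data are placeholders, not values from the experiments.

```python
import numpy as np

def frozen_turbulence_volume(vorticity_frames, dt, U_conv):
    """Stack cross-plane vorticity fields omega(y, z) into a pseudo-3-D field omega(x, y, z).

    vorticity_frames: array-like of shape (n_frames, ny, nz), one PIV field per time step
    dt: time between PIV fields (s); U_conv: assumed constant convection velocity (m/s)
    """
    frames = np.asarray(vorticity_frames)
    x = U_conv * dt * np.arange(frames.shape[0])   # streamwise coordinate assigned to each slice
    return x, frames

# Usage with placeholder data: 120 fields on a 64x64 grid, 1 kHz PIV, 0.25 m/s convection
x, omega_xyz = frozen_turbulence_volume(np.random.randn(120, 64, 64), dt=1e-3, U_conv=0.25)
```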

Keywords: Bubbly flow, particle image velocimetry, two-phase flow, wake structures.

64 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotics research is to improve robots’ abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions—including facial expression, speech, gesture or text—and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that extends into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, which has the objective of making robots capable of sensing the user’s current mood and personality traits and adapting their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments’ results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.
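
A per-approach evaluation of the kind described above typically compares each recognizer's predicted basic-emotion labels against the ground-truth labels of a test set using overall accuracy and a confusion matrix. The sketch below is one way to compute those metrics; the label lists are placeholders, not data from the paper.

```python
from sklearn.metrics import accuracy_score, confusion_matrix

labels = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
y_true = ["happiness", "sadness", "anger", "neutral", "surprise"]          # ground truth (placeholder)
y_pred = ["happiness", "neutral", "anger", "neutral", "fear"]              # one approach's output (placeholder)

print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=labels))
```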

Keywords: Affective computing, emotion recognition, humanoid robot, Human-Robot-Interaction (HRI), social robots.

63 Formulation and in vitro Evaluation of Ondansetron Hydrochloride Matrix Transdermal Systems Using Ethyl Cellulose/Polyvinyl Pyrrolidone Polymer Blends

Authors: Rajan Rajabalaya, Li-Qun Tor, Sheba David

Abstract:

Transdermal delivery of ondansetron hydrochloride (OdHCl) can prevent the problems encountered with oral ondansetron. In previously conducted studies, the effects of the amount of polyvinyl pyrrolidone, permeation enhancer and casting solvent on the physicochemical properties of OdHCl were investigated. It is feasible to develop an ondansetron transdermal patch using ethyl cellulose and polyvinyl pyrrolidone with dibutyl phthalate as plasticizer; however, the desired flux is not achieved. The primary aim of this study is to use dimethyl succinate (DMS) and propylene glycol, which were not incorporated in previous studies, to determine their effect on the physicochemical properties of an OdHCl transdermal patch using ethyl cellulose and polyvinyl pyrrolidone. This study also investigates the effect of permeation enhancers (eugenol and phosphatidylcholine) on the release of OdHCl. The results showed that propylene glycol is a more suitable plasticizer compared to DMS in the fabrication of an OdHCl transdermal patch using ethyl cellulose and polyvinyl pyrrolidone as polymers. The propylene glycol-containing patch has optimum drug content, thickness, moisture content, water absorption and tensile strength, and a better release profile than the DMS-containing patch. Eugenol and phosphatidylcholine can increase the release of OdHCl from the patches. From the physicochemical results and permeation profiles, a combination of 350 mg of ethyl cellulose, 150 mg polyvinyl pyrrolidone, 3% of total polymer weight of eugenol, and 40% of total polymer weight of propylene glycol is the most suitable formulation to develop an OdHCl patch. OdHCl release did not increase with increasing plasticizer percentage. DMS 4, PG 4, DMS 9, PG 9, DMS 14, and PG 14 gave better release profiles when using 300 mg:0 mg, 300 mg:100 mg, and 350 mg:150 mg of EC:PVP. Thus, 40% PG or DMS appeared to be the optimum amount of plasticizer when the above EC:PVP combinations were used. It was concluded from the study that a patch formulation containing 350 mg EC, 150 mg PVP, 40% PG and 3% eugenol is the best transdermal matrix patch composition for the uniform and continuous release/permeation of OdHCl over an extended period. This patch design can be used for further pharmacokinetic and pharmacodynamic studies in suitable animal models.

Keywords: Ondansetron hydrochloride, dimethyl succinate, eugenol.

62 Culture Sustainability in Contemporary Vernacular Architecture: Case Study of Muscat International Airport

Authors: S. Hegazy

Abstract:

Culture sustainability, which reflects a deep respect for people and history, is a matter of concern in contemporary architecture. The adoption of ultramodern architectural styles began in the 20th century in a plurality of states worldwide. Only a few countries, including Oman, realized that fashionable architectural designs ignore cultural values, identity, environmental context, economic perspective, and social performance. The transformation of the Sultanate of Oman from a listless, closed community into a modern country started in 1970. Despite unprecedented development in all aspects of Omani people's life, the leadership and the public had the capability to adjust to changing global challenges without compromising social values and identity. This research provides a close analysis of one of the recent examples of contemporary vernacular architecture in the Sultanate of Oman as a case study: Oman International Airport. The airport gained international appreciation for its Omani-themed architecture, distinguished traveler experience, and advanced technology. Accordingly, it was selected by the World Travel Awards as the Best Tourism Development Project in the Middle East only four weeks after starting its operation. This paper aims to transfer to other countries, designers, researchers, and students this successful design approach of integrating the latest trends in technology, systems, eco-friendly aspects, and materials with traditional Omani architectural features, which reflects a symbiotic harmony of community, individuals, and environment. In addition, the paper aims to encourage architects and teachers to take responsibility for valorizing built heritage as a source of inspiration for modern architecture, which could be considered an added value. The work is based on a review of the relevant literature, a case study, interviews with two architects who were involved in the project’s site work and with one current high-ranking airport employee, as well as data analysis and conclusions.

Keywords: Contemporary vernacular architecture, culture sustainability, Oman international airport, current Omani architecture type.

61 The Effect of Simulated Acid Rain on Glycine max

Authors: Nilima Gajbhiye

Abstract:

Acid rain occurs when sulphur dioxide (SO2) and nitrogen oxide (NOx) gases react in the atmosphere with water, oxygen, and other chemicals to form various acidic compounds. The result is a mild solution of sulfuric acid and nitric acid. Soil has a greater buffering capacity than aquatic systems; however, excessive amounts of acid introduced by acid rain may disturb the entire soil chemistry. Acidity and the harmful action of toxic elements damage vegetation, while susceptible microbial species are eliminated. In the present study, the effects of simulated sulphuric acid and nitric acid rain on the crop Glycine max were investigated. The effect of acid rain on soil fertility was assessed: the pH of the control sample was 6.5, while the pH of the 1% H2SO4- and 1% HNO3-treated soils was 3.5. Nitrate nitrogen in soil was high in the 1% HNO3-treated soil and the control sample. Ammonium nitrogen in soil was low in the 1% HNO3- and H2SO4-treated soils and medium in the control and other samples. Regarding the effect of acid rain on seed germination, on the 3rd day of germination the control sample growth was 7 cm, 0.1% HNO3 was 8 cm, and 0.001% HNO3 and 0.001% H2SO4 were 6 cm each. On the 10th day, fungal growth was observed at the 1% and 0.1% H2SO4 concentrations, where all plants were dead. The effect of acid rain on crop productivity was investigated; on the 3rd day, roots had developed in the plants. On the 12th day, Glycine max showed more growth in the 0.1% HNO3-treated plants, while the growth of the 0.001% HNO3- and 0.001% H2SO4-treated plants was the same as that of the control plants. On the 20th day, discoloration of plant pigments was observed on the leaves of acid-treated plants. On the 38th day, the 0.1% and 0.001% HNO3-treated, 0.1% and 0.001% H2SO4-treated and control plants were showing flower growth. On the 42nd day, the acid-treated Glycine max plants and the control plants showed seeds. The 0.1% and 0.001% H2SO4- and 0.1% and 0.001% HNO3-treated Glycine max plants were dead by the 46th day, and fungal growth was observed. In the toxicological study carried out on Glycine max plants, cells exposed to 1% HNO3 were damaged more than those exposed to 1% H2SO4. Leaf sections exposed to 0.001% HNO3 and H2SO4 showed less cell damage, and pigmentation was observed across the entire slide when compared with the control plant. Soil analysis was done to detect microorganisms in the HNO3- and H2SO4-treated Glycine max and control plant soils. No microbial growth was observed at 1% HNO3 and H2SO4, but the control soil showed microbial growth.

Keywords: Acid rain, Glycine max, HNO3 & H2SO4, Pigmentation.

60 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases

Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner

Abstract:

Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once approved by the competent authority (in the UK, the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. Adherence to the basic safety principles in ISO 12807 is very pessimistic and is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that the intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty about the releasable fraction of material within the package ullage space. This is because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
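
Restricted to the two retained processes, a monodisperse form of the dynamic aerosol balance is dn/dt = −K·n² − (v_s/H)·n, with K the Brownian coagulation coefficient and v_s a Stokes settling velocity over a cavity of height H. The sketch below integrates that simplified balance; the particle size, density and cavity height are illustrative, not values from a package safety case, and a real assessment would use a size-resolved model.

```python
import numpy as np

K_B, T, MU = 1.38e-23, 293.0, 1.8e-5        # Boltzmann constant, temperature (K), air viscosity (Pa s)
RHO_P, D_P, H = 2500.0, 1e-6, 0.5           # particle density (kg/m3), diameter (m), cavity height (m)

K = 8.0 * K_B * T / (3.0 * MU)              # continuum-regime coagulation coefficient (m3/s)
V_S = RHO_P * 9.81 * D_P**2 / (18.0 * MU)   # Stokes settling velocity (m/s)

def number_concentration(n0, t_end, dt=1.0):
    """Explicit integration of dn/dt = -K n^2 - (v_s/H) n starting from n0 (#/m3)."""
    n, history = n0, []
    for _ in np.arange(0.0, t_end, dt):
        n += dt * (-K * n**2 - (V_S / H) * n)
        history.append(n)
    return np.array(history)

n_t = number_concentration(n0=1e12, t_end=3600.0)   # one hour of decay
print(f"suspended fraction after 1 h: {n_t[-1] / 1e12:.3e}")
```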

Keywords: Aerosol processes, Brownian coagulation, gravitational settling, transport regulations.

59 Closing the Loop between Building Sustainability and Stakeholder Engagement: Case Study of an Australian University

Authors: Karishma Kashyap, Subha D. Parida

Abstract:

Rapid population growth and urbanization are creating pressure throughout the world, with dramatic effects on key services such as water, food, transportation, energy, and infrastructure. The built environment sector is growing concurrently to meet the needs of urbanization, and such large-scale development of buildings needs to be monitored and managed efficiently. Along with appropriate management, climate adaptation is crucial because buildings are one of the major sources of greenhouse gas emissions in their operation phase. To be adaptive, buildings need to follow a triple bottom line approach to sustainability, i.e., being socially, environmentally and economically sustainable. Hence, in order to deliver these sustainability outcomes, there is a growing understanding of, and drive towards, switching to green buildings or renovating existing ones to green standards wherever possible. Academic institutions in particular have been following this trend globally. This is highly significant as universities usually have high occupancy rates and manage large building portfolios. Also, as universities accommodate the future generation of architects, policy makers, etc., they have the potential to set themselves up as a best-practice model of research and innovation for the rest of the industry to follow. Hence their climate adaptation, sustainable growth and performance management become highly crucial in order to provide the best services to users. With the objective of evaluating appropriate management mechanisms within academic institutions, a feasibility study was carried out in a recent 5-Star Green Star rated university building (housing the School of Construction) in Victoria (a south-eastern state of Australia). The key aim was to understand the behavioral and social aspects of the building users and management, and the impact of their relationship on overall building sustainability. A survey was used to understand the building occupants' responses and reactions to their work environment and its management. A report was generated from the survey results, complemented with utility and performance data, and was then used to evaluate the management structure of the university. Following the report, interviews were scheduled with the facility and asset managers in order to understand the approach they use to manage the different buildings on their university campuses (old, new, refurbished) and the parameters incorporated in maintaining the Green Star performance of the respective building. The results aim at closing the communication and feedback loop within the respective institutions and assisting the facility managers to deliver appropriate stakeholder engagement. For the wider design community, analysis of the data highlights the applicability and significance of prioritizing key stakeholders, integrating desired engagement policies within an institution's management structures and frameworks, and their effect on building performance.

Keywords: Building Optimization, Green Building, Post Occupancy Evaluation, Stakeholder Engagement.

58 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first South Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected considering the various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison has shown that the radioactive waste amount from Kori Unit 1 decommissioning is much less than that from plants decommissioned in the U.S. and is comparable to that from plants in Europe. This result comes from the differences in disposal costs and clearance criteria (i.e., free release levels) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will provide important information for decommissioning planning, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.
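
A minimal sketch of the classification step is given below: a toy SSC inventory is binned by an assumed representative specific activity against hypothetical class thresholds and summed by mass. The thresholds, inventory items and single-number activities are all illustrative assumptions; they do not reflect the DeCAT program, Korean regulatory limits, or Kori Unit 1 data.

```python
# Illustrative sketch only: classifying a toy SSC inventory by specific activity
# against assumed thresholds. Real classification uses nuclide-specific limits.
from dataclasses import dataclass

@dataclass
class SSC:
    name: str
    mass_t: float              # tonnes
    activity_bq_per_g: float   # representative specific activity

# Hypothetical thresholds (Bq/g) separating clean / VLLW / LLW / ILW material.
CLEARANCE, VLLW_LIMIT, LLW_LIMIT = 0.1, 10.0, 1.0e4

def classify(ssc: SSC) -> str:
    a = ssc.activity_bq_per_g
    if a < CLEARANCE:
        return "clean"
    if a < VLLW_LIMIT:
        return "VLLW"
    if a < LLW_LIMIT:
        return "LLW"
    return "ILW"

# Made-up inventory entries for illustration only.
inventory = [
    SSC("steam generator", 300.0, 5.0),
    SSC("reactor vessel internals", 90.0, 5.0e4),
    SSC("turbine hall steelwork", 1200.0, 0.01),
]

totals: dict[str, float] = {}
for ssc in inventory:
    cls = classify(ssc)
    totals[cls] = totals.get(cls, 0.0) + ssc.mass_t

for waste_class, mass in totals.items():
    print(f"{waste_class}: {mass:.0f} t")
```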

Keywords: Characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste.

57 Development of Requirements Analysis Tool for Medical Autonomy in Long-Duration Space Exploration Missions

Authors: Lara Dutil-Fafard, Caroline Rhéaume, Patrick Archambault, Daniel Lafond, Neal W. Pollock

Abstract:

Improving resources for medical autonomy of astronauts in prolonged space missions, such as a Mars mission, requires not only technology development, but also decision-making support systems. The Advanced Crew Medical System - Medical Condition Requirements study, funded by the Canadian Space Agency, aimed to create knowledge content and a scenario-based query capability to support medical autonomy of astronauts. The key objective of this study was to create a prototype tool for identifying medical infrastructure requirements in terms of medical knowledge, skills and materials. A multicriteria decision-making method was used to prioritize the highest risk medical events anticipated in a long-term space mission. Starting with those medical conditions, event sequence diagrams (ESDs) were created in the form of decision trees where the entry point is the diagnosis and the end points are the predicted outcomes (full recovery, partial recovery, or death/severe incapacitation). The ESD formalism was adapted to characterize and compare possible outcomes of medical conditions as a function of available medical knowledge, skills, and supplies in a given mission scenario. An extensive literature review was performed and summarized in a medical condition database. A PostgreSQL relational database was created to allow query-based evaluation of health outcome metrics with different medical infrastructure scenarios. Critical decision points, skill and medical supply requirements, and probable health outcomes were compared across chosen scenarios. The three medical conditions with the highest risk rank were acute coronary syndrome, sepsis, and stroke. Our efforts demonstrate the utility of this approach and provide insight into the effort required to develop appropriate content for the range of medical conditions that may arise.
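
A minimal sketch of such a scenario-based query is shown below. It uses an in-memory SQLite database with a hypothetical two-table schema and made-up probabilities as a stand-in for the study's PostgreSQL medical condition database; table names, columns and values are assumptions for illustration only.

```python
# Self-contained sketch of a scenario-based query comparing predicted health
# outcomes across medical infrastructure scenarios. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE condition (id INTEGER PRIMARY KEY, name TEXT, risk_rank INTEGER);
CREATE TABLE outcome (
    condition_id INTEGER REFERENCES condition(id),
    scenario TEXT,            -- e.g. 'full_kit', 'reduced_kit'
    result TEXT,              -- 'full recovery' | 'partial recovery' | 'death/incapacitation'
    probability REAL
);
INSERT INTO condition VALUES (1, 'acute coronary syndrome', 1), (2, 'sepsis', 2), (3, 'stroke', 3);
INSERT INTO outcome VALUES
 (1, 'full_kit', 'full recovery', 0.70), (1, 'reduced_kit', 'full recovery', 0.40),
 (2, 'full_kit', 'full recovery', 0.65), (2, 'reduced_kit', 'full recovery', 0.30);
""")

# Compare the predicted full-recovery probability across the two scenarios.
rows = conn.execute("""
    SELECT c.name, o.scenario, o.probability
    FROM condition c JOIN outcome o ON o.condition_id = c.id
    WHERE o.result = 'full recovery'
    ORDER BY c.risk_rank, o.scenario
""").fetchall()
for name, scenario, p in rows:
    print(f"{name:28s} {scenario:12s} P(full recovery) = {p:.2f}")
```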

Keywords: Decision support system, event sequence diagram, exploration mission, medical autonomy, scenario-based queries, space medicine.

56 Florida’s Groundwater and Surface Water System Reliability in Terms of Climate Change and Sea-Level Rise

Authors: Rahman Davtalab, Saba Ghotbi

Abstract:

Florida is one of the states most vulnerable to natural disasters among the 50 states of the USA. The state is exposed to tropical storms, hurricanes, storm surge, landslides, etc. Besides these natural phenomena, global warming, sea-level rise, and other anthropogenic environmental changes create a very complicated and unpredictable system for decision-makers. In this study, we highlight the effects of climate change and sea-level rise on surface water and groundwater systems for three different geographical locations in Florida: the Main Canal of Jacksonville Beach in northeast Florida, adjacent to the Atlantic Ocean; Grace Lake in central Florida, far from the surrounding coastline; and Mc Dill, adjacent to Tampa Bay and the Gulf of Mexico. An integrated hydrologic and hydraulic model was developed and simulated for all three cases, covering surface water, groundwater, or a combination of both. For the Main Canal-Jacksonville Beach case, the investigation showed that a 76 cm sea-level rise by the 2060 time horizon could increase the flow velocity of the tide cycle at the main canal's outlet and headwater. This case also revealed how sea-level rise could change the tide duration, potentially affecting the coastal ecosystem. As expected, sea-level rise can raise the groundwater level. Therefore, for the Mc Dill case, the effect of groundwater rise on soil storage and the performance of stormwater retention ponds was investigated. The study showed that sea-level rise increases the pond's seasonal high water by up to 40 cm by the 2060 time horizon. The reliability of the retention pond drops from 99% under the current condition to 54% in the future. The results also showed that the retention pond could not retain and infiltrate the design treatment volume within 72 hours, which is a significant indication of increasing pollutants in the future. The Grace Lake case study investigates the effects of climate change on groundwater recharge; using dynamically downscaled climate data, it showed that groundwater recharge can decline by up to 24% by the mid-21st century.
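
The retention pond reliability figure can be read as the fraction of events in which the design treatment volume infiltrates within 72 hours. The toy calculation below illustrates that definition under an assumed linear degradation of infiltration capacity as the seasonal high water rises; the rates, areas and the event-volume distribution are invented for illustration and are not the study's model.

```python
# Toy reliability calculation for a stormwater retention pond: the fraction of
# simulated events in which the treatment volume infiltrates within 72 hours.
# All numbers and the linear head-reduction assumption are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_events = 10_000
treatment_volume = 500.0          # m^3, assumed design treatment volume
pond_area = 400.0                 # m^2, assumed pond bottom area
infil_rate_current = 0.02         # m/h, assumed vertical infiltration rate today

def reliability(seasonal_high_rise_m: float) -> float:
    # Assume infiltration capacity degrades linearly as the water table approaches the pond bottom.
    infil_rate = infil_rate_current * max(0.0, 1.0 - seasonal_high_rise_m / 0.8)
    drained_72h = infil_rate * pond_area * 72.0                     # m^3 infiltrated in 72 h
    event_volumes = rng.gamma(shape=2.0, scale=treatment_volume / 2.0, size=n_events)
    captured = np.minimum(event_volumes, treatment_volume)          # only the treatment volume must infiltrate
    return float(np.mean(captured <= drained_72h))

print(f"current condition : {reliability(0.0):.2%}")
print(f"2060 (+0.40 m SHW): {reliability(0.40):.2%}")
```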

Keywords: Groundwater, surface water, Florida, retention pond, tide, sea-level rise.

55 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors, called gimbals, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to a gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as its location and velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and the alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The greater the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles; however, few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we demonstrate the effectiveness of the approach through computer simulations.
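
The geometric core of the localization problem, mapping gimbal azimuth, elevation and LRF slant range through the ship attitude into a local frame, can be sketched as follows. The UKF in the paper fuses sequences of such noisy measurements; this snippet shows only the deterministic geometry, and the angles, range and mast offset are made-up illustrative values.

```python
# Geometry-only sketch of ship-borne target localization: gimbal azimuth/elevation
# plus LRF slant range are rotated through the ship attitude into a local NED frame.
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Body-to-NED rotation matrix for ZYX (yaw-pitch-roll) Euler angles [rad]."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def localize(az, el, slant_range, yaw, pitch, roll, mast_ned):
    """Target position in the local NED frame (north, east, down), in metres."""
    # Line of sight in the ship body frame: x forward, y starboard, z down.
    los_body = np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         -np.sin(el)])
    return mast_ned + rot_zyx(yaw, pitch, roll) @ (slant_range * los_body)

# Example: target 30 deg off the bow, 1 deg below horizontal, 8 km away,
# observed from a mast 20 m above the waterline (NED "down" = -20 m).
p = localize(az=np.radians(30), el=np.radians(-1), slant_range=8000.0,
             yaw=np.radians(10), pitch=np.radians(2), roll=np.radians(5),
             mast_ned=np.array([0.0, 0.0, -20.0]))
print(f"target NED position [m]: {np.round(p, 1)}")
```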

Keywords: Target localization, ship-borne electro-optical stabilized platform, unscented Kalman filter.

54 Impact of Mixing Parameters on Homogenization of Borax Solution and Nucleation Rate in Dual Radial Impeller Crystallizer

Authors: A. Kaćunić, M. Ćosić, N. Kuzmanić

Abstract:

Interaction between mixing and crystallization is often ignored despite the fact that it affects almost every aspect of the operation, including nucleation, growth, and maintenance of the crystal slurry. This is especially pronounced in multiple-impeller systems, where flow complexity is increased. By choosing proper mixing parameters, which closely depends on knowledge of the hydrodynamics in the mixing vessel, the batch cooling crystallization process may be considerably improved. The values that render useful information when making this choice are mixing time and power consumption. The predominant motivation for this work was to investigate the extent to which a radial dual-impeller configuration influences mixing time, power consumption and, consequently, the values of the metastable zone width and nucleation rate. In this research, crystallization of borax was conducted in a 15 dm3 baffled batch cooling crystallizer with an aspect ratio (H/T) of 1.3. Mixing was performed using two straight blade turbines (4-SBT) mounted on the same shaft, generating radial fluid flow. Experiments were conducted at different values of the N/NJS ratio (impeller speed/minimum impeller speed for complete suspension), D/T ratio (impeller diameter/crystallizer diameter), c/D ratio (lower impeller off-bottom clearance/impeller diameter), and s/D ratio (spacing between impellers/impeller diameter). The mother liquor was saturated at 30°C and cooled at a rate of 6°C/h. Its concentration was monitored in line by a Na-ion selective electrode. From the supersaturation values, monitored continuously over the process time, it was possible to determine the metastable zone width and subsequently the nucleation rate using Mersmann's nucleation criterion. For all applied dual-impeller configurations, the mixing time was determined by a potentiometric method using a pulse technique, while the power consumption was determined using a torque meter produced by Himmelstein & Co. Results obtained in this investigation show that the dual-impeller configuration significantly influences the values of mixing time and power consumption as well as the metastable zone width and nucleation rate. Special attention should be paid to the impeller spacing, considering the flow interaction that can be more or less pronounced depending on the spacing value.
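
As an illustration of how the metastable zone width can be extracted from an in-line concentration record, the sketch below computes the maximum absolute supersaturation reached during a 6 °C/h cooling run. The solubility fit, the nucleation onset and the "measured" concentrations are synthetic stand-ins, and the full Mersmann nucleation criterion is not implemented.

```python
# Illustrative post-processing sketch: estimate the maximum attained supersaturation
# (metastable zone width expressed as dc_max = c - c*) from a cooling run.
import numpy as np

def solubility(T_C):
    # Hypothetical quadratic solubility fit c* [kg/kg solution] vs temperature [degC].
    return 0.020 + 1.2e-3 * T_C + 2.0e-5 * T_C**2

cooling_rate = 6.0 / 60.0                 # degC per minute (6 degC/h)
t = np.arange(0.0, 240.0, 1.0)            # time, min
T = 30.0 - cooling_rate * t               # linear cooling from saturation at 30 degC

# Synthetic concentration record: stays at the initial saturation value until
# primary nucleation, then relaxes back towards the solubility curve.
c0 = solubility(30.0)
t_nucleation = 120.0                      # assumed onset of nucleation, min
c = np.where(t < t_nucleation, c0,
             solubility(T) + (c0 - solubility(T)) * np.exp(-(t - t_nucleation) / 30.0))

dc = c - solubility(T)                    # absolute supersaturation
i_max = int(np.argmax(dc))
print(f"metastable zone width dc_max = {dc[i_max]:.4f} kg/kg "
      f"at T = {T[i_max]:.1f} degC (t = {t[i_max]:.0f} min)")
```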

Keywords: Dual impeller crystallizer, mixing time, power consumption, metastable zone width, nucleation rate.

53 Modelling and Control of Milk Fermentation Process in Biochemical Reactor

Authors: Jožef Ritonja

Abstract:

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and knowledge of how to control it are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of a control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and that is suitable for control design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and is presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure the optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An improved adaptive control system was therefore proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested in the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities significant for the quality of the fermentation process and its level gives important information about the process. The obtained results showed that the designed control system provides minimal error between the reference and actual values of the carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems, which are based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
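
To illustrate the kind of adaptive scheme suited to a slowly time-varying plant, the sketch below applies the classic MIT-rule gain adaptation to a first-order process with a drifting gain, tracking a step-wise CO2 reference. The plant model, the drift and all numerical values are illustrative assumptions; this is not the paper's bioreactor model or controller.

```python
# Minimal sketch of adaptive control for a fermentation-like CO2 loop: a first-order
# plant with a slowly drifting gain is driven to follow a reference model using the
# classic MIT-rule adjustment of a feedforward gain.
import numpy as np

dt, t_end = 0.01, 200.0
a, k0, gamma = 0.1, 1.0, 1.0           # plant/model pole, model gain, adaptation gain
t = np.arange(0.0, t_end, dt)
r = np.where((t // 50) % 2 == 0, 1.0, 0.5)   # step-wise CO2 concentration reference

y = ym = theta = 0.0
for i, ti in enumerate(t):
    k = 1.0 + 0.5 * ti / t_end         # slowly drifting plant gain (time-varying process)
    u = theta * r[i]                   # adaptive feedforward control law
    y  += dt * (-a * y  + a * k  * u)      # plant:  dy/dt  = -a*y  + a*k*u
    ym += dt * (-a * ym + a * k0 * r[i])   # model:  dym/dt = -a*ym + a*k0*r
    e = y - ym
    theta += dt * (-gamma * e * ym)    # MIT rule: adjust gain to drive the model error to zero

print(f"final tracking error |y - ym| = {abs(y - ym):.4f}, adapted gain theta = {theta:.3f}")
```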

Keywords: Bioprocess engineering, biochemical reactor, fermentation process, modeling, adaptive control.

52 Financial Regulations in the Process of Global Financial Crisis and Macroeconomics Impact of Basel III

Authors: M. Okan Tasar

Abstract:

Basel III (or the Third Basel Accord) is a global regulatory standard on bank capital adequacy, stress testing and market liquidity risk, agreed upon by the members of the Basel Committee on Banking Supervision in 2010-2011 and scheduled to be introduced from 2013 until 2018. Basel III is a comprehensive set of reform measures. These measures aim to (1) improve the banking sector's ability to absorb shocks arising from financial and economic stress, whatever the source, (2) improve risk management and governance, and (3) strengthen banks' transparency and disclosures. Correspondingly, the reforms target (1) bank-level, or micro-prudential, regulation, which will help raise the resilience of individual banking institutions to periods of stress, and (2) macro-prudential regulation of system-wide risks that can build up across the banking sector, as well as the pro-cyclical amplification of these risks over time. These two approaches to supervision are complementary, as greater resilience at the individual bank level reduces the risk of system-wide shocks. Regarding the macroeconomic impact of Basel III, the OECD estimates that the medium-term impact of Basel III implementation on GDP growth is in the range of -0.05 percent to -0.15 percent per year. Economic output is mainly affected by an increase in bank lending spreads, as banks pass a rise in funding costs, due to higher capital requirements, on to their customers; consequently, the estimated effects on GDP growth assume no active response from monetary policy. The Basel III impact on economic output could be offset by a reduction (or delayed increase) in monetary policy rates of about 30 to 80 basis points. The aim of this paper is to create a framework, based on the recent regulations, for preventing financial crises; in this way, the experience of overcoming the global financial crisis can contribute to addressing financial crises that may occur in future periods. The first part of the paper examines the effects of the global crisis on the banking system and the concept of financial regulation. The second part analyzes financial regulations and, in particular, Basel III. The last section explores the possible macroeconomic impacts of Basel III.

Keywords: Banking Systems, Basel III, Financial regulation, Global Financial Crisis.
