Search results for: input constraints
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3348

978 Performance Evaluation of a Small Microturbine Cogeneration Functional Model

Authors: Jeni A. Popescu, Sorin G. Tomescu, Valeriu A. Vilag

Abstract:

The paper focuses on potential methods of increasing the performance of a microturbine by combining additional elements available for utilization in a cogeneration plant. The activity is carried out within the framework of a project aiming to develop, manufacture, and test a microturbine functional model with high potential for utilization in the energy industry. The main goal of the analysis is to determine the parameters of the fluid flow passing through each section of the turbine, based on the limited data available in the literature for the targeted output power range or provided by experimental studies, starting from a reference cycle and considering different cycle options, including simple, intercooled, and recuperated configurations, in order to optimize the operation of a small cogeneration plant. The studied configurations operate under the same initial thermodynamic conditions and are based on a series of assumptions regarding the individual performance of the components, pressure/velocity losses, compression ratios, and efficiencies. The thermodynamic analysis evaluates the expected performance of the microturbine cycle, while providing a series of input data and limitations to be included in the development of the experimental plan. To simplify the calculations and allow a clear estimation of the effect of heat transfer between fluids, the working fluid for all thermodynamic evolutions is initially air, with combustion modelled as simple heat addition to the system. The theoretical results, along with preliminary experimental results, are presented, aiming for a correlation in terms of microturbine performance.
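
The air-standard analysis with heat addition in place of combustion, as described above, can be sketched for the simple-cycle case as follows. This is a minimal illustrative calculation of an ideal Brayton cycle; the inlet temperature, pressure ratio, turbine inlet temperature, and air properties below are generic textbook assumptions, not values from the study.

```python
# Ideal air-standard Brayton cycle: combustion modelled as simple heat addition.
# All parameter values are illustrative assumptions, not figures from the paper.

def brayton_thermal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal simple-cycle thermal efficiency for air as the working fluid."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

def cycle_states(t_inlet_k, pressure_ratio, t_turbine_inlet_k, gamma=1.4, cp=1005.0):
    """Return specific compressor work, turbine work, and heat input (J/kg)."""
    exp = (gamma - 1.0) / gamma
    t2 = t_inlet_k * pressure_ratio ** exp          # after isentropic compression
    t4 = t_turbine_inlet_k / pressure_ratio ** exp  # after isentropic expansion
    w_comp = cp * (t2 - t_inlet_k)                  # compressor work input
    w_turb = cp * (t_turbine_inlet_k - t4)          # turbine work output
    q_in = cp * (t_turbine_inlet_k - t2)            # heat added to the system
    return w_comp, w_turb, q_in

w_c, w_t, q = cycle_states(288.0, 4.0, 1100.0)
eta = (w_t - w_c) / q  # matches brayton_thermal_efficiency(4.0) for the ideal cycle
```

For the ideal cycle the net-work-over-heat-input ratio reproduces the closed-form efficiency, which is a useful sanity check before layering in the intercooled and recuperated options and the component loss assumptions.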

Keywords: cogeneration, microturbine, performance, thermodynamic analysis

Procedia PDF Downloads 169
977 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in construction projects. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so identification of risks is necessary. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled using the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning”, and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure from the client”, and “lack of knowledge among architects about the requirements of end-users”. In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier; nevertheless, “lack of coordination between the architectural design and the electrical and mechanical services”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
976 Knowledge, Perceptions, and Barriers of Preconception Care among Healthcare Workers in Nigeria

Authors: Taiwo Hassanat Bawa-Muhammad, Opeoluwa Hope Adegoke

Abstract:

Introduction: This study aims to examine the knowledge and perceptions of preconception care among healthcare workers in Nigeria, recognizing its crucial role in ensuring safe pregnancies. Despite its significance, awareness of preconception care remains low in the country. The study seeks to assess the understanding of preconception services and identify the barriers that hinder their efficacy. Methods: Through semi-structured interviews, 129 healthcare workers across six states in Nigeria were interviewed between January and March 2023. The interviews explored the healthcare workers' knowledge of preconception care practices, the socio-cultural influences shaping decision-making, and the challenges that limit accessibility and utilization of preconception care services. Results: The findings reveal a limited knowledge of preconception care among healthcare workers, primarily due to inadequate information dissemination within the healthcare system. Additionally, cultural beliefs significantly influence perceptions surrounding preconception care. Furthermore, financial constraints, distance to healthcare facilities, and poor health infrastructure disproportionately restrict access to preconception services, particularly for vulnerable populations. The study also highlights insufficient skills and outdated training among healthcare workers regarding preconception guidance, primarily attributed to limited opportunities for professional development. Discussion: To improve preconception care in Nigeria, comprehensive education programs must be implemented, taking into account the societal influences that shape perceptions and behaviors. These programs should aim to dispel myths and promote evidence-based practices. Additionally, training healthcare workers and integrating preconception care services into primary care settings, with support from religious and community leaders, can help overcome barriers to access. 
Strategies should prioritize affordability while emphasizing the broader benefits of preconception care beyond fertility concerns alone. Lastly, widespread literacy campaigns utilizing trusted channels are crucial for effectively disseminating information and promoting the adoption of preconception practices in Nigeria.

Keywords: preconception care, knowledge, healthcare workers, Nigeria, barriers, education, training

Procedia PDF Downloads 97
975 Vibration Analysis and Optimization Design of Ultrasonic Horn

Authors: Kuen Ming Shu, Ren Kai Ho

Abstract:

The ultrasonic horn has the functions of amplifying amplitude and reducing resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus ultrasonic energy on a small area. It is a crucial component in the design of ultrasonic vibration systems. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, the equal mechanical impedance method, the transfer matrix method, and the finite element method. In addition, the general optimization design process changes the geometric parameters to improve a single performance measure; consequently, it cannot reveal the relationship between parameters and objectives. However, a good optimization design must be able to establish the relationship between input parameters and output parameters so that the designer can choose among parameters according to different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and actually measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was then used to perform response surface optimization, which shows the relationship between parameters and objectives. Therefore, this method can be used to replace the traditional experience-based or trial-and-error design approaches, reducing material costs and design cycles.
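
The analytical method mentioned above can be illustrated with its simplest case: the fundamental longitudinal resonance of a uniform rod-type horn, f = c / (2L) with wave speed c = sqrt(E / rho). This is a generic textbook relation, not the Maxwide horn's actual geometry; the material values below are approximate figures for a titanium alloy, assumed only for illustration.

```python
# Half-wavelength tuning of a uniform cylindrical horn (thin-rod approximation).
# Material constants are generic Ti-6Al-4V estimates, assumed for illustration.
import math

def wave_speed(youngs_modulus_pa, density_kg_m3):
    """Longitudinal thin-rod wave speed c = sqrt(E / rho)."""
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

def half_wave_length(frequency_hz, youngs_modulus_pa, density_kg_m3):
    """Horn length tuned to resonate at the given drive frequency: L = c / (2f)."""
    return wave_speed(youngs_modulus_pa, density_kg_m3) / (2.0 * frequency_hz)

E, rho = 110e9, 4430.0                     # approximate Ti-6Al-4V properties
L20k = half_wave_length(20_000, E, rho)    # length for a 20 kHz horn, ~0.125 m
```

Real horns taper to amplify amplitude, so FEA (as used in the study) is needed for the actual mode shapes; this closed form only gives a starting length for the uniform-section case.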

Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration

Procedia PDF Downloads 116
974 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or in clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms, and due to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature; this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 309
973 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity, and high frequency that make forecasting quite difficult. Electricity prices have a volatile yet non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and DNN approaches give successful results in forecasting problems. Historical load, price, and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared regarding their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance compared to the shallow-ANN models. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402, and 0.409.
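
The two error metrics used to rank the models above can be stated compactly. This is a minimal sketch of MAE and MSE only; the short price series below is made-up illustrative data, not the Turkish day-ahead market data used in the study.

```python
# MAE and MSE as used to compare forecasting models; illustrative data only.

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the forecast errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error: penalizes large forecast errors more heavily."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [30.0, 32.5, 31.0, 29.5]      # hypothetical hourly day-ahead prices
predicted = [29.6, 32.9, 31.3, 29.2]   # hypothetical model forecasts
errors = mae(actual, predicted), mse(actual, predicted)
```

Because MSE squares each residual, a model with occasional large misses can rank well on MAE yet poorly on MSE, which is why the study reports both.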

Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey

Procedia PDF Downloads 292
972 Korean Smart Cities: Strategic Foci, Characteristics and Effects

Authors: Sang Ho Lee, Yountaik Leem

Abstract:

This paper reviews Korean cases of smart cities through an analysis framework of strategic foci, characteristics, and effects. Firstly, national strategies including the c(cyber), e(electronic), u(ubiquitous), and s(smart) Korea strategies were considered from strategic angles. Secondly, the characteristics of smart cities in Korea were examined through examples such as Seoul, Busan, Songdo, and Sejong, using STIM (Service, Technology, Infrastructure, and Management) analysis. Finally, the effects of smart cities on socio-economies were investigated from an industrial perspective using the input-output model and structural path analysis. Korean smart city strategies revealed different kinds of strategic foci. The c-Korea strategy focused on building information and communications networks and on user IT literacy. The e-Korea strategy encouraged e-government and e-business by utilizing the high-speed information and communications network. The u-Korea strategy delivered ubiquitous services as well as integrated information and communication operations centers. The s-Korea strategy is propelling the fourth industrial platform. Smart cities in Korea showed their own features and trends, such as eco-intelligence, high-efficiency and low-cost oriented IoT, the citizen-sensed city, and the big data city. Smart city progress created new production chains, fostering ICTs (Information and Communication Technologies) and knowledge intermediate inputs to industries.
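
The input-output model underlying this kind of industrial-effect analysis is the Leontief relation x = (I − A)⁻¹ d, linking gross sector output x to final demand d through the technical-coefficient matrix A. The two-sector coefficients and demand figures below are invented for illustration, not Korean data.

```python
# Leontief input-output model for a two-sector economy, solved by hand via
# the 2x2 inverse of (I - A). Coefficients are illustrative assumptions.

def leontief_output_2x2(a, demand):
    """Solve (I - A) x = d for a 2x2 technical-coefficient matrix A."""
    m = [[1.0 - a[0][0], -a[0][1]],
         [-a[1][0], 1.0 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (m[1][1] * demand[0] - m[0][1] * demand[1]) / det  # Cramer's rule
    x1 = (m[0][0] * demand[1] - m[1][0] * demand[0]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # inputs from sector 1 needed per unit of each sector's output
     [0.1, 0.4]]   # inputs from sector 2 needed per unit of each sector's output
final_demand = [100.0, 50.0]
gross_output = leontief_output_2x2(A, final_demand)
```

The solution satisfies x = Ax + d: each sector's gross output covers intermediate use by both sectors plus final demand, which is the balance a structural path analysis then decomposes.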

Keywords: Korean smart cities, Korean smart city strategies, STIM, smart service, infrastructure, technologies, management, effect of smart city

Procedia PDF Downloads 366
971 Promoting Girls’ and Women’s Right to Education: Challenges and Strategies

Authors: Kwizera Mireille, Kharesh Ahmed Al-Khadher

Abstract:

This paper explores the critical issue of girls' and women's right to education, examining the challenges they face in accessing and benefiting from quality education. Gender disparities in education have persisted globally, hindering social progress and sustainable development. The fundamental importance of education in empowering individuals and promoting gender equality is acknowledged, making it imperative to address the disparities that limit girls' and women's educational opportunities. The paper discusses various factors contributing to these disparities, including cultural norms (common in third-world countries), socio-economic constraints, and systemic biases. Drawing on a wide range of scholarly sources, empirical studies, and reports from international organizations, the paper highlights the broader societal benefits of educating girls and women, ranging from improved health outcomes to enhanced economic development and greater social and political participation. The paper further outlines strategies and initiatives aimed at overcoming these challenges, including policy interventions, community-based programs, and international collaborations that work towards eliminating gender-based discrimination in educational settings. It emphasizes the significance of not only ensuring access but also fostering an inclusive and safe learning environment that encourages girls and women to thrive academically and personally. By analyzing successful case studies and best practices from around the world, the paper offers insights into effective approaches that can be adopted to enhance girls' and women's right to education globally. Furthermore, it emphasizes the importance of raising awareness of girls' and women's education. In conclusion, this paper underscores the urgency of prioritizing and protecting girls' and women's right to education as a fundamental human right and a catalyst for gender equality.
It calls for a concerted effort from governments, NGOs, educational institutions, and society as a whole to create an equitable and empowering educational landscape that contributes to gender equality and sustainable development.

Keywords: empowerment, gender equality, inclusive education, right to education

Procedia PDF Downloads 68
970 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing

Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang

Abstract:

The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with high levels of furniture products not recycled or reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows assessment of goals, constraints, materials, and manufacturing processes simultaneously. ‘Matrixing’ for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact institute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in ‘mass customised’ furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing. 
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.

Keywords: additive manufacturing, generative design, robot, sustainability

Procedia PDF Downloads 131
969 Treatment of Healthcare Wastewater Using The Peroxi-Photoelectrocoagulation Process: Predictive Models for Chemical Oxygen Demand, Color Removal, and Electrical Energy Consumption

Authors: Samuel Fekadu A., Esayas Alemayehu B., Bultum Oljira D., Seid Tiku D., Dessalegn Dadi D., Bart Van Der Bruggen A.

Abstract:

The peroxi-photoelectrocoagulation process was evaluated for the removal of chemical oxygen demand (COD) and color from healthcare wastewater. A 2-level full factorial design with center points was created to investigate the effect of the process parameters, i.e., initial COD, H₂O₂, pH, reaction time, and current density. Furthermore, the total energy consumption and average current efficiency in the system were evaluated. Predictive models for % COD removal, % color removal, and energy consumption were obtained. The initial COD and pH were found to be the most significant variables in the reduction of COD and color in the peroxi-photoelectrocoagulation process. Hydrogen peroxide has a significant effect on the treated wastewater only when combined with other input variables in the process, such as pH, reaction time, and current density. In the peroxi-photoelectrocoagulation process, current density appears not as a single effect but rather as an interaction effect with H₂O₂ in reducing COD and color. Lower energy expenditure was observed at higher initial COD, shorter reaction time, and lower current density. The average current efficiency was found to be as low as 13% and as high as 777%. Overall, the study showed that hybrid electrochemical oxidation can be applied effectively and efficiently for the removal of pollutants from healthcare wastewater.
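
The experimental design named above, a 2-level full factorial in five coded factors plus center points, can be sketched as follows. The factor names follow the abstract, but the number of center points is an assumption for illustration; the study does not state it.

```python
# Coded design matrix for a 2-level full factorial with center points.
# Factor list follows the abstract; the center-point count (3) is assumed.
import itertools

factors = ["initial_COD", "H2O2", "pH", "reaction_time", "current_density"]

def full_factorial_with_centers(n_factors, n_center=3):
    """All +/-1 corner runs, followed by replicated center runs at 0."""
    corners = [list(run) for run in itertools.product([-1, 1], repeat=n_factors)]
    centers = [[0] * n_factors for _ in range(n_center)]
    return corners + centers

design = full_factorial_with_centers(len(factors))  # 2**5 + 3 = 35 runs
```

The corner runs estimate main effects and interactions (such as the H₂O₂-current density interaction reported above), while the replicated center runs provide a pure-error estimate and a check for curvature.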

Keywords: electrochemical oxidation, UV, healthcare pollutants removals, factorial design

Procedia PDF Downloads 79
968 Selection of Soil Quality Indicators of Rice Cropping Systems Using Minimum Data Set Influenced by Imbalanced Fertilization

Authors: Theresa K., Shanmugasundaram R., Kennedy J. S.

Abstract:

Nutrient supplements are indispensable for raising crops and attaining the desired productivity. The imbalance between nutrient replenishment and crop uptake is addressed through the input of inorganic fertilizers. However, excessive application of inorganic nutrients to the soil causes stagnation and decline in yield, and an imbalanced N-P-K ratio in the soil disturbs soil ecosystems. The study evaluated the effects of the fertilization practices of conventional farming (CF), organic farming, and the Integrated Nutrient Management (INM) system on soil quality using key indicators and soil quality indices. Twelve rice farming fields in the Thondamuthur block of Coimbatore district, of which ten followed conventional cultivation practices and one each was cultivated under organic farming and INM in a monocropping sequence, were fixed, and their physical, chemical, and biological properties were studied for four cropping seasons to determine the soil quality index (SQI). SQI was computed for the conventional, organic, and INM fields. Compared with organic farming and INM, conventional farming recorded a lower soil quality index, while the organic and INM fields registered higher SQI values of 0.99 and 0.88, respectively. CF₄, which received a super-optimal dose of N (250%), showed a lower SQI value (0.573) as well as a lower yield (3.20 t ha⁻¹), whereas CF₆, which received 125% N, recorded the highest SQI (0.715) and yield (6.20 t ha⁻¹). Likewise, most of the CFs received N beyond the 125% level, except CF₃ and CF₉, and recorded lower yields. The CFs that received super-optimal P, in the order CF₆ & CF₇ > CF₁ & CF₁₀, recorded lower yields except for CF₆. Super-optimal K application also resulted in lower yields in CF₄, CF₇, and CF₉.
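
A soil quality index of the kind computed above is commonly built as a weighted additive index: each minimum-data-set indicator is scored onto a 0-1 scale and the weighted scores are summed. The indicator names, scores, and weights below are invented examples to show the arithmetic; they are not the study's minimum data set or weights.

```python
# Weighted additive SQI sketch: SQI = sum(w_i * s_i), with weights summing
# to 1 and each indicator scored to [0, 1]. All values here are hypothetical.

def soil_quality_index(scores, weights):
    """Weighted additive SQI over a minimum data set of indicators."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[name] * scores[name] for name in scores)

scores = {"organic_carbon": 0.9, "available_N": 0.7,
          "microbial_biomass": 0.8, "bulk_density": 0.6}
weights = {"organic_carbon": 0.35, "available_N": 0.25,
           "microbial_biomass": 0.25, "bulk_density": 0.15}
sqi = soil_quality_index(scores, weights)  # 0.78 on this illustrative data
```

In practice the weights are often derived from a principal component analysis of the indicator set rather than assigned by hand, which is what makes the index comparable across fields.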

Keywords: rice cropping system, soil quality indicators, imbalanced fertilization, yield

Procedia PDF Downloads 157
967 Trauma System in England: An Overview and Future Directions

Authors: Raheel Shakoor Siddiqui, Sanjay Narayana Murthy, Manikandar Srinivas Cheruvu, Kash Akhtar

Abstract:

Major trauma is a dynamic public health epidemic that is continuously evolving. Major trauma care services rely on multi-disciplinary input from highly trained pre- and in-hospital critical care teams. Pre-hospital critical care teams (PHCCTs), major trauma centres (MTCs), trauma units, and rehabilitation facilities all form an efficient and organised trauma system. England comprises 27 MTCs funded by the National Health Service (NHS). Major trauma care entails enhanced resuscitation protocols coupled with the expertise of dedicated trauma teams and rapid radiological imaging to improve trauma outcomes. The literature reports a change in the demographic of major trauma, as elderly patients (silver trauma) with injuries sustained from falls of 2 metres or less now commonly present to services. Evidence of an ageing population with multiple comorbidities necessitates treatment within the first hour of injury (the golden hour) to improve trauma survival outcomes. Staffing and funding pressures within the NHS have subsequently led to a shortfall of available physician-led PHCCTs. Thus, there is a strong emphasis on targeted research and funding to appropriately deploy resources to deprived areas. This review article discusses the current English trauma system, critically appraising present challenges, identifying insufficiencies, and recommending aims for an improved future trauma system in England.

Keywords: trauma, orthopaedics, major trauma, trauma system, trauma network

Procedia PDF Downloads 187
966 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example

Authors: D. Jayalakshmi, S. S. Bhosale

Abstract:

This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic-reinforced unpaved roads. The literature since 1970 is covered, and the critical appraisals of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006) are presented. A design example is then illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method with the Indian Roads Congress guidelines IRC SP 72-2015. The input data relate to the subgrade soil conditions of Maharashtra State in India. The Unified Soil Classification of the subgrade soil is inorganic clay of high plasticity (CH), which is expansive, with a California Bearing Ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, with the rut depth varied from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogues is in good agreement with the Giroud and Han (2004) approach for rut depths in the range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, the base thickness for the reinforced case was obtained for Indian conditions using the same data, the same rut depth, and the appropriate Nc factor. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and validation will be executed using the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.
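
Two of the inputs that drive the Nc-factor comparison mentioned above can be sketched numerically. In the commonly cited form of the Giroud-Han (2004) approach, the subgrade undrained shear strength is correlated to CBR as roughly cu ≈ 30 × CBR (kPa), and the bearing-capacity factor Nc rises from π (unreinforced) to π + 2 (geotextile-reinforced). Treat the snippet below as an illustration of those two inputs only, not a reproduction of the full design equation, which also involves traffic passes, rut depth, wheel load, and reinforcement stiffness.

```python
# Two inputs to the Giroud-Han style comparison: CBR-based subgrade strength
# and the bearing-capacity factor Nc. Values follow commonly cited forms of
# the method and should be treated as assumptions, not the paper's equations.

def undrained_shear_strength_kpa(cbr_percent):
    """Common correlation used with the method: cu ~ 30 * CBR (kPa)."""
    return 30.0 * cbr_percent

NC_UNREINFORCED = 3.14   # ~pi, unreinforced case
NC_GEOTEXTILE = 5.14     # ~pi + 2, geotextile-reinforced case

def limiting_bearing_pressure_kpa(cbr_percent, nc):
    """Mobilized subgrade bearing capacity, s = Nc * cu."""
    return nc * undrained_shear_strength_kpa(cbr_percent)

p_unreinforced = limiting_bearing_pressure_kpa(2.0, NC_UNREINFORCED)  # CBR 2%
p_geotextile = limiting_bearing_pressure_kpa(2.0, NC_GEOTEXTILE)
```

The higher mobilized bearing pressure in the reinforced case is what lets the method arrive at a thinner base course for the same rut-depth criterion, consistent with the 35% reduction reported above.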

Keywords: base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition

Procedia PDF Downloads 193
965 Assessment of the Implementation of Recommended Teaching and Evaluation Methods of NCE Arabic Language Curriculum in Colleges of Education in North Western Nigeria

Authors: Hamzat Shittu Atunnise

Abstract:

This study on the Assessment of the Implementation of the Recommended Teaching and Evaluation Methods of the Nigeria Certificate in Education (NCE) Arabic Language Curriculum in Colleges of Education in North Western Nigeria was conducted with four objectives, four research questions, and four null hypotheses. A descriptive survey design was used, and a multistage sampling procedure was adopted. Frequency counts and percentages were used to answer the research questions, and chi-square was used to test all the null hypotheses at an alpha level of 0.05. Two hundred and ninety-one subjects were drawn as the sample. Questionnaires were used for data collection. The Context, Input, Process, and Product (CIPP) model of evaluation was employed. The study findings indicated that there were no significant differences in the perceptions of lecturers and students from Federal and State Colleges of Education on the following: the extent to which lecturers employ appropriate methods in teaching the language, and the extent to which the recommended evaluation methods are utilized for the implementation of the Arabic curriculum. Based on these findings, it was recommended, among other things, that lecturers should adopt teaching methodologies that promote interactive learning; governments should ensure that information and communication technology facilities are made available and usable in all Colleges of Education; lecturers should vary their evaluation methods, because other methods of evaluation can meet and surpass the level of learning and understanding that essay-type questions are believed to create; and language labs should be used in teaching Arabic in Colleges of Education, because comprehensive language learning is possible through both classroom and language lab teaching.

Keywords: assessment, Arabic language, curriculum, methods of teaching, evaluation methods, NCE

Procedia PDF Downloads 60
964 The Effect of the Environmental Activities of Organizations on Financial Performance

Authors: Fatemeh Khalili Varnamkhasti

Abstract:

Environmental management has external effects, and companies regularly regard environmental input as a cost with no clear benefit. Therefore, if environmental protection can bring economic benefits, showing that environmental protection and financial interests are in harmony, companies will actively fulfill their obligation to protect the environment. Pollution is, for the most part, associated with wasted resources, lost energy, and raw materials not fully utilized. Pollution prevention and clean technology, as internal organizational practices, can help minimize costs and develop economic capabilities for the long run, whereas external organizational practices (product stewardship and a sustainability vision) can help integrate stakeholder views into business operations and formulate future business directions. Taken together, these practices can drive shareholder value while at the same time contributing to a more sustainable world. If a company's financial performance is good, it will attract investors to invest and improve the company's performance. In this way, financial performance is also a determinant of the progress of a company, because the financial support obtained by the company becomes the basis for running its business processes in the future. Moreover, a green image can assist firms in attracting more clients by influencing consumer choices and improving consumer brand loyalty. Many consumers want to purchase products from environmentally friendly firms, although there are, of course, some who will not pay premium prices for green products.

Keywords: environmental activities, financial performance, advantage, clients

Procedia PDF Downloads 57
963 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process

Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum

Abstract:

Due to the increasing demand for quality assurance and reliability in additive manufacturing, the development of an advanced in-situ monitoring system is required to monitor process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS cameras and NIR cameras, have proven effective for monitoring geometrical distortion and exceptional thermal distribution. Therefore, many studies and applications focus on the availability of optical-based monitoring systems for detecting various types of defects. However, the capability of the monitoring setup itself is usually not quantified. In this study, a quantification model is presented for evaluating the capability of monitoring setups for the LPBF machine, based on monitoring data acquired from a designed test artifact, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the light conditions. The data-processing methodology used to quantify the capability in each aspect is discussed. The minimal detectable size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process and provides direction for improving the setups.
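As a simple illustration of how resolution relates to a minimal detectable size, the following Python sketch computes the object-side pixel size and a Nyquist-style detectability limit. The pixel pitch, magnification, and two-pixel criterion are illustrative assumptions, not values from the study.

```python
# Minimal-detectable-feature estimate for an optical in-situ monitoring setup.
# All numbers are illustrative assumptions, not values measured in the study.

def spatial_resolution_um(pixel_pitch_um: float, magnification: float) -> float:
    """Object-side size of one pixel: sensor pixel pitch divided by magnification."""
    return pixel_pitch_um / magnification

def min_detectable_um(pixel_pitch_um: float, magnification: float,
                      pixels_required: int = 2) -> float:
    """A feature must span at least `pixels_required` pixels to be resolved
    reliably (a Nyquist-style criterion)."""
    return pixels_required * spatial_resolution_um(pixel_pitch_um, magnification)

# Example: hypothetical 5.5 um pixels imaged at 0.5x magnification
res = spatial_resolution_um(5.5, 0.5)   # object-side size of one pixel, in um
dmin = min_detectable_um(5.5, 0.5)      # smallest reliably detectable feature
```

Accuracy (systematic offset of the measured size) would then be folded in on top of this resolution floor, as the abstract's model does for each setup.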

Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact

Procedia PDF Downloads 197
962 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It comprises several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures: the fractal dimension, Jeffrey's measure, and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether or not a signal is fractal. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but can be used for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing sudden changes in the plot.
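The sliding-window Hurst analysis described above can be sketched as follows. This is a generic rescaled-range (R/S) estimator written in Python, not FRATSAN's actual Visual C++ implementation; the 64-sample window floor is an assumption added so the log-log fit has enough scales.

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a series."""
    x = np.asarray(x, dtype=float)
    ns = np.unique(len(x) // np.arange(2, 20))   # candidate sub-series lengths
    ns = ns[ns >= 8]
    rs = []
    for n in ns:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        z = dev.cumsum(axis=1)
        r = z.max(axis=1) - z.min(axis=1)        # range of cumulative deviations
        s = chunks.std(axis=1)
        ok = s > 0
        rs.append((r[ok] / s[ok]).mean())
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)   # H is the log-log slope
    return slope

def sliding_hurst(x, frac=0.10):
    """H over a window of 10% of the data, advanced one entry at a time,
    mirroring the dynamic analysis described in the abstract."""
    w = max(int(len(x) * frac), 64)
    return [hurst_rs(x[i:i + w]) for i in range(len(x) - w + 1)]

# white noise should give an H estimate in the vicinity of 0.5
rng = np.random.default_rng(1)
h = hurst_rs(rng.standard_normal(2000))
```

A sudden change in the sliding-window H trace is the kind of feature the abstract proposes for seizure-onset prediction.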

Keywords: EEG signals, fractal analysis, fractal dimension, Hurst exponent, Jeffrey's measure

Procedia PDF Downloads 467
961 Effect of Knowledge of Bubble Point Pressure on Estimating PVT Properties from Correlations

Authors: Ahmed El-Banbi, Ahmed El-Maraghi

Abstract:

PVT properties are needed as input data in all reservoir, production, and surface facilities engineering calculations. In the absence of PVT reports on valid reservoir fluid samples, engineers rely on PVT correlations to generate the required PVT data. The accuracy of PVT correlations varies, and no correlation group has been found to provide accurate results for all oil types. The effect of inaccurate PVT data can be significant in engineering calculations and is well documented in the literature. Bubble point pressure can sometimes be obtained from external sources. In this paper, we show how to utilize a known bubble point pressure to improve the accuracy of PVT properties calculated from correlations. We conducted a systematic study using around 250 reservoir oil samples to quantify the effect of pre-knowledge of bubble point pressure. The samples spanned a wide range of oils, from very volatile oils to black oils and all the way to low-GOR oils. A method for shifting both the undersaturated and saturated sections of the PVT property curves to the correct bubble point is explained. Seven PVT correlation families were used in this study. All PVT properties (e.g., solution gas-oil ratio, formation volume factor, density, viscosity, and compressibility) were calculated using both the correct bubble point pressure and the correlation-estimated bubble point pressure. Comparisons between the calculated PVT properties and actual laboratory-measured values were made. It was found that pre-knowledge of bubble point pressure, combined with the shifting technique presented in the paper, improved the correlation-estimated values by 10% to more than 30%. The greatest improvement was seen in the solution gas-oil ratio and formation volume factor.
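A minimal sketch of the idea of shifting a correlation-generated saturated-section curve to a known bubble point, assuming a simple proportional rescaling of the pressure axis. Both the rescaling rule and all numbers here are illustrative assumptions; the paper's actual shifting technique may differ.

```python
def shift_saturated_curve(pressures, values, pb_est, pb_true):
    """Proportionally rescale the pressure axis of a correlation-generated
    saturated-section property curve so that it terminates at the known
    bubble point instead of the correlation-estimated one.
    (Illustrative only; not necessarily the paper's exact technique.)"""
    scale = pb_true / pb_est
    return [(p * scale, v) for p, v in zip(pressures, values)]

# hypothetical case: correlation predicted Pb = 2500 psia,
# but the externally known (lab-reported) Pb is 2300 psia
shifted = shift_saturated_curve([500.0, 1500.0, 2500.0],
                                [200.0, 600.0, 900.0],
                                pb_est=2500.0, pb_true=2300.0)
# the last point of the saturated section now sits at the true bubble point
```

The undersaturated section would be translated analogously so the two branches meet at the corrected bubble point.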

Keywords: PVT data, PVT properties, PVT correlations, bubble point pressure

Procedia PDF Downloads 63
960 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series fluctuate at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data-driven, making it possible to obtain well-behaved signal components without any prior assumptions about the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
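Of the three families, singular spectrum analysis is the most compact to illustrate. The following Python sketch shows the basic SSA pipeline (embedding into a trajectory matrix, SVD, anti-diagonal averaging); the window length is a free parameter the analyst must choose.

```python
import numpy as np

def ssa_decompose(x, window):
    """Basic singular spectrum analysis: embed the series in a trajectory
    matrix, take its SVD, and turn each singular triple back into an
    additive time-series component by anti-diagonal (Hankel) averaging."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])   # rank-1 elementary matrix
        comp = np.zeros(n)
        cnt = np.zeros(n)
        for row in range(window):                # row `row` covers x[row:row+k]
            comp[row:row + k] += elem[row]
            cnt[row:row + k] += 1
        comps.append(comp / cnt)                 # anti-diagonal average
    return np.array(comps)                       # components sum back to x

# toy series: oscillation plus trend, decomposed with a window of 30 samples
t = np.linspace(0.0, 6.0, 120)
x = np.sin(2.0 * t) + 0.5 * t
comps = ssa_decompose(x, window=30)
```

Grouping the leading components recovers trend and oscillatory modes; the trailing ones carry noise, which is the basis of SSA denoising.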

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 117
959 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds such as hydrogen sulphide and mercaptans is one of the reasons why wastewater treatment and waste management are associated with odour emissions. In this context, a method for quantifying these compounds helps in the optimization of treatment aimed at their elimination, namely biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography with flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and the analysis was performed on a GC 2010 Plus A from Shimadzu with a sulphur-filter detector: splitless mode (0.3 min); column temperature program from 60 ºC, increased by 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC and the detector at 260 ºC. For the calibration curve, a gas dilution unit (digital Hovagas G2 Multi Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the dilute gas and another for a stream of nitrogen, and an output connected to a glass bulb. Cylinders of 40 ppm H2S and 50 ppm MM were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration in the ranges 1-20 ppm for H2S, and 0.02-0.1 ppm and 1-3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating at a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.
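The dilution standards define a calibration line from which sample concentrations are read back. A minimal Python sketch of that step, with illustrative peak areas rather than the study's measured detector responses:

```python
import numpy as np

# Linear calibration from gas-standard dilutions. The concentrations span the
# H2S range stated in the abstract; the peak areas are made-up placeholders.
conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0])             # standards, ppm
area = np.array([210.0, 1050.0, 2100.0, 3150.0, 4200.0])  # detector peak areas

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line

def quantify(peak_area):
    """Convert a sample's peak area to concentration via the calibration line."""
    return (peak_area - intercept) / slope
```

Inlet/outlet concentrations quantified this way give the removal efficiency of the biofilter directly.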

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 476
958 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation

Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar

Abstract:

Plasma actuators are attractive for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, and they have numerous applications in hypersonic and supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across the electrodes, which ionizes the air molecules surrounding the electrodes and accelerates them through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. To carry out flow separation control in a hypersonic flow, the optimal design and construction of a power supply to generate dielectric barrier discharges is carried out in this paper. We aim to construct a simplified circuit topology to emulate the dielectric barrier discharge and study its frequency response. The power supply can generate high-voltage pulses up to 20 kV over a repetition frequency range of 20-50 kHz with an input power of 500 W. It has been designed to be short-circuit proof and can endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency to generate the high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
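The charge/discharge outline can be checked with back-of-the-envelope energy bookkeeping: the capacitor energy per pulse times the repetition rate bounds the average input power. The component values below are illustrative assumptions chosen to land on the stated 500 W / 20 kHz operating point, not the paper's design values.

```python
# Energy bookkeeping for a charge-then-discharge pulse topology.
# Capacitance and voltage are hypothetical, not the paper's component values.

def pulse_energy_j(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in the capacitor before each discharge: E = C*V^2/2."""
    return 0.5 * capacitance_f * voltage_v ** 2

def average_power_w(capacitance_f, voltage_v, rep_freq_hz):
    """Average input power if the full capacitor energy is delivered
    (losslessly) once per repetition cycle."""
    return pulse_energy_j(capacitance_f, voltage_v) * rep_freq_hz

# e.g. a 50 nF capacitor charged to 1 kV on the primary side, fired at 20 kHz
p = average_power_w(50e-9, 1000.0, 20e3)
```

The step-up transformer then raises the discharge voltage toward the 20 kV pulse level without changing this energy balance (losses aside).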

Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source

Procedia PDF Downloads 126
957 Aeroacoustics Investigations of Unsteady 3D Airfoil for Different Angle Using Computational Fluid Dynamics Software

Authors: Haydar Kepekçi, Baha Zafer, Hasan Rıza Güven

Abstract:

Noise disturbance is one of the major factors considered in the fast development of aircraft technology. This paper examines the flow field over 2D NACA0015 and 3D NACA0012 blade profiles, using the SST k-ω turbulence model to compute the unsteady flow field. The time-dependent flow-field variables are inserted into the Ffowcs Williams-Hawkings (FW-H) equations as input, and Sound Pressure Level (SPL) values are computed for different angles of attack (AoA) at a microphone positioned in the computational domain, in order to investigate the noise level of the unsteady 2D and 3D airfoil regions. The computed results are compared with experimental data available in the open literature. Among the results, one of the calculated Cp values is slightly lower than the experimental value; this difference could be due to the higher Reynolds number of the experimental data. The ANSYS Fluent software, which provides well-validated physical modeling capabilities across a wide range of CFD and multiphysics applications, was used in this study of external flow over an airfoil. The 2D NACA0015 case has approximately 7 million elements and solves compressible fluid flow with heat transfer using the SST turbulence model; the 3D NACA0012 case has approximately 3 million elements.
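The SPL values reported at the microphone follow from the standard definition SPL = 20·log₁₀(p_rms/p_ref), with the reference pressure p_ref = 20 µPa in air. A minimal Python sketch of that post-processing step:

```python
import math

P_REF = 20e-6  # reference pressure for SPL in air, 20 micropascals

def rms(samples):
    """RMS of the acoustic pressure fluctuation recorded at the microphone."""
    return math.sqrt(sum(p * p for p in samples) / len(samples))

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB from an RMS acoustic pressure in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

# a 1 Pa RMS pressure fluctuation corresponds to roughly 94 dB
level = spl_db(1.0)
```

In the FW-H workflow, the pressure time history at the observer point (here, the in-domain microphone) is what feeds `rms` before conversion to dB.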

Keywords: 3D blade profile, noise disturbance, aeroacoustics, Ffowcs-Williams and Hawkings (FW-H) equations, k-ω-SST turbulence model

Procedia PDF Downloads 212
956 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision-making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects, and this technology has proved to be efficient and adequate in acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN), and runoff index, at a spatial resolution of 30 m. The data used for deriving the above layers include a 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network, and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall; prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytic Hierarchy Process (AHP), is used as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate, and less suitable. This study demonstrates decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can be helpful for water resource management organizations in the determination of feasible rainwater harvesting (RWH) structures.
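The AHP weighting step can be sketched as follows: the criteria weights are the principal eigenvector of the pairwise-comparison matrix, computed here by power iteration. The three-criterion matrix below is hypothetical, not the weights actually used in the study.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Criteria weights from an AHP pairwise-comparison matrix via its
    principal eigenvector (computed by power iteration; the matrix must be
    positive and reciprocal, a[j][i] == 1/a[i][j])."""
    a = np.asarray(pairwise, dtype=float)
    w = np.ones(a.shape[0]) / a.shape[0]
    for _ in range(iters):
        w = a @ w
        w /= w.sum()          # keep the weights normalized to 1
    return w

# Illustrative 3-criterion comparison (say slope vs. drainage density vs.
# rainfall); the ratios are hypothetical, not the study's judgments.
m = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])
w = ahp_weights(m)   # weights sum to 1, first criterion dominating
```

The resulting weights are what the weighted-overlay step multiplies against each reclassified raster layer.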

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 389
955 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis

Authors: Akinola Ikudayisi, Josiah Adeyemo

Abstract:

The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, there are several variables that must be considered while estimating and modeling ETₒ. This study therefore presents a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa, using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from both the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and also to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performances, two principal components with a cumulative explained variance of 82.7% were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature, and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables, such as rainfall and relative humidity, are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modeling at VIS, South Africa.
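The retention step, keeping the fewest principal components whose cumulative explained variance reaches a target, can be sketched in Python as follows; the synthetic five-variable input stands in for the study's meteorological data.

```python
import numpy as np

def pca_retain(data, target_var=0.80):
    """PCA on standardized inputs: keep the fewest principal components
    whose cumulative explained variance reaches `target_var` (the study
    retained two components at 82.7%)."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)   # standardize columns
    eigval, eigvec = np.linalg.eigh(np.cov(z, rowvar=False))
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]      # sort descending
    explained = eigval / eigval.sum()
    k = int(np.searchsorted(np.cumsum(explained), target_var)) + 1
    return z @ eigvec[:, :k], explained[:k]             # scores, variance shares

# synthetic stand-in: 240 months of 5 variables, two of them correlated
rng = np.random.default_rng(0)
raw = rng.standard_normal((240, 5))
raw[:, 1] = raw[:, 0] + 0.2 * raw[:, 1]   # e.g. min and max temperature co-vary
scores, ev = pca_retain(raw, target_var=0.80)
```

Inspecting the loadings (`eigvec` columns) of the retained components is what identifies which original variables, here temperature and wind speed in the study, dominate.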

Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts

Procedia PDF Downloads 258
954 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of ≳65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys would perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
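As a toy stand-in for the full probability-density approach, template classification can be illustrated by a minimum-chi-squared match between observed colors and library templates. The two-template library, color values, and error bars below are made-up numbers, not CADIS data.

```python
import numpy as np

def classify_by_template(colors, color_err, templates, labels):
    """Pick the template class minimizing chi-squared between the observed
    colors and each library template. (A toy stand-in for the abstract's
    unified probability-density classification.)"""
    obs = np.asarray(colors, dtype=float)
    err = np.asarray(color_err, dtype=float)
    chi2 = (((templates - obs) / err) ** 2).sum(axis=1)  # per-template misfit
    best = int(np.argmin(chi2))
    return labels[best], chi2[best]

# hypothetical two-color library: one "star" and one "quasar" template
lib = np.array([[0.6,  0.3],    # star colors
                [0.1, -0.2]])   # quasar colors
cls, c2 = classify_by_template([0.12, -0.18], [0.05, 0.05],
                               lib, ["star", "quasar"])
```

The full method replaces this hard minimum by probability density functions over class and redshift, which is what also yields the redshift error estimate.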

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 78
953 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches

Authors: S. Sandri, G. M. Contessa, C. Poggi

Abstract:

An experimental nuclear fusion device can be seen as a step toward the development of future nuclear fusion power plants. Compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, the component materials of a nuclear fusion plant could be selected so as to limit the decay period, making recycling in a new reactor possible about 100 years after the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in recent decades, underlining that radiation protection and worker exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion, and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries.
The following aspects are considered throughout the current study: i) description of the installation, plant and systems; ii) suitability of the area, buildings, and structures; iii) radioprotection structures and organization; iv) exposure of personnel; v) accident analysis and relevant radiological consequences; vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.

Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection

Procedia PDF Downloads 352
952 Development of Probability Distribution Models for Degree of Bending (DoB) in Chord Member of Tubular X-Joints under Bending Loads

Authors: Hamid Ahmadi, Amirreza Ghaffari

Abstract:

The fatigue life of tubular joints in offshore structures is not only dependent on the value of the hot-spot stress, but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The DoB exhibits considerable scatter, calling for greater emphasis on the accurate determination of its governing probability distribution, which is a key input for the fatigue reliability analysis of a tubular joint. Although tubular X-joints are commonly found in offshore jacket structures, as far as the authors are aware, no comprehensive research has been carried out on the probability distribution of the DoB in tubular X-joints. What has been used so far as the probability distribution of the DoB in reliability analyses is mainly based on assumptions and limited observations, especially in terms of distribution parameters. In the present paper, results of parametric equations available for the calculation of the DoB have been used to develop probability distribution models for the DoB in the chord member of tubular X-joints subjected to four types of bending loads. Based on a parametric study, a set of samples was prepared, and density histograms were generated for these samples using the Freedman-Diaconis method. Twelve different probability density functions (PDFs) were fitted to these histograms. The maximum likelihood method was utilized to determine the parameters of the fitted distributions. In each case, the Kolmogorov-Smirnov test was used to evaluate the goodness of fit. Finally, after substituting the values of the estimated parameters for each distribution, a set of fully defined PDFs has been proposed for the DoB in tubular X-joints subjected to bending loads.
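The fit-and-test step can be sketched for a single candidate distribution. The following Python sketch fits a normal distribution by maximum likelihood and computes the Kolmogorov-Smirnov statistic by hand; the study fitted twelve candidate PDFs, and the normal is used here only as an example.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_and_ks(sample):
    """Maximum-likelihood fit of a normal distribution (mean, std) and the
    Kolmogorov-Smirnov statistic D = sup |F_empirical - F_fitted|."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / n)  # MLE uses 1/n
    xs = sorted(sample)
    # D is the largest gap between the empirical step CDF and the fitted CDF,
    # checked just before and just after each data point
    d = max(max(abs((i + 1) / n - normal_cdf(x, mu, sigma)),
                abs(i / n - normal_cdf(x, mu, sigma)))
            for i, x in enumerate(xs))
    return mu, sigma, d

# illustrative DoB-like sample (made-up values)
mu, sigma, d = fit_and_ks([0.71, 0.68, 0.74, 0.70, 0.69, 0.72, 0.73, 0.67])
```

Repeating this for each candidate PDF and comparing the D statistics (against the appropriate critical values) is what selects the best-fitting distribution.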

Keywords: tubular X-joint, degree of bending (DoB), probability density function (PDF), Kolmogorov-Smirnov goodness-of-fit test

Procedia PDF Downloads 719
951 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis

Authors: Claudia Aguilar, Amen Farooq

Abstract:

Among other cities in the United States, the city of Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With the surge in population, the available housing is practically bursting at the seams: supply is low and demand is high. As a result, the average one-bedroom apartment runs for 1,800 dollars per month. The city is urgently seeking new opportunities to provide affordable housing at an expeditious rate, as made evident by the recent updates to the city's zoning. With the current state of the housing market, young professionals, in particular millennials, are desperately looking for alternatives that allow them to stay within the city. To remedy Atlanta's affordable housing crisis, the city is planning to introduce 40 thousand new affordable housing units by 2026. To meet this urgent need, the architectural response must adapt. A method that has proven successful in modern housing is modular development, a method that has, however, been constrained by the maximum load dimensions of an eighteen-wheeler. This constraint has diluted the architect's ability to produce site-specific, informed design and has contributed to the "cookie cutter" stigma with which the method has been labeled. This thesis explores a design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that can break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing.
This proposal aims specifically to design a kit of parts that is easy to transport and assemble, while also allowing the use of components to be customized for all unique conditions. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption, while providing quality housing with affordable and flexible options.

Keywords: adaptive assemblies, modular architecture, adaptability, constructability, kit of parts

Procedia PDF Downloads 85
950 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of an optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machines and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos has been to find a model that explains chaotic behaviour; this model may reveal further insight into quantum chaos.
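As an illustration of the Kalman filter step mentioned above, the following Python sketch filters a latent AR(1) state from noisy observations. The model and all parameter values are generic assumptions, not the paper's QTS formulation.

```python
import numpy as np

def kalman_ar1(obs, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a latent AR(1) state x_t = phi*x_{t-1} + w_t,
    observed as y_t = x_t + v_t with w ~ N(0, q) and v ~ N(0, r)."""
    x, p, est = x0, p0, []
    for y in obs:
        x, p = phi * x, phi * phi * p + q        # predict step
        k = p / (p + r)                          # Kalman gain
        x, p = x + k * (y - x), (1.0 - k) * p    # update step
        est.append(x)
    return np.array(est)

# noisy constant signal (phi = 1): the filtered state converges toward 1.0
filtered = kalman_ar1([1.0] * 50, phi=1.0, q=0.01, r=1.0)
```

In a parameter-estimation setting, the same prediction-error recursion feeds the likelihood that MCMC or other methods then explore for phi, q, and r.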

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 469
949 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, it is possible to mount a camera on different vehicles, such as quadcopters, trains, and airplanes. The camera can also be the input sensor in many different systems, which means that object recognition, as an integral part of monitoring and control, can be a key part of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. While the vehicle moves, the camera takes pictures of the environment without storing them in a database. If the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the work station in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with little pedestrian traffic; if one or more persons approach the road, the traffic lights turn green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system is presented, comprising the camera, a Raspberry Pi platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures, and the object recognition is done in real time using the OpenCV library on the Raspberry Pi. An additional feature of the system is the ability to record the GPS coordinates of the captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. Through a neural network, the module can learn to solve problems using incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
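The save-and-send-on-detection behaviour described above can be sketched as an event-driven loop. The detector and uplink below are plain Python callables standing in for the OpenCV classifier and the network link to the work station; the frame format and labels are hypothetical.

```python
import time

def monitor(frames, detect, send):
    """Event-driven loop sketched from the abstract: frames are examined but
    not stored; only when the detector fires is the picture saved and sent,
    with a timestamp, to the remote work station. `detect` returns a label
    such as "human" or "animal", or None for an uninteresting frame."""
    saved = []
    for frame in frames:
        label = detect(frame)
        if label is not None:
            record = {"frame": frame, "label": label, "ts": time.time()}
            saved.append(record)       # picture kept only on detection
            send(record)               # forwarded in real time, no database
    return saved

# stub detector and uplink, exercising the loop on three fake frames
sent = []
out = monitor(["empty", "person", "empty"],
              lambda f: "human" if f == "person" else None,
              sent.append)
```

In the real system, `record` would also carry the GPS coordinates read from the positioning module at capture time.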

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 218